00:00:00.000 Started by upstream project "autotest-per-patch" build number 127191 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24328 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:06.609 The recommended git tool is: git 00:00:06.609 using credential 00000000-0000-0000-0000-000000000002 00:00:06.611 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:06.623 Fetching changes from the remote Git repository 00:00:06.625 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.636 Using shallow fetch with depth 1 00:00:06.636 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.636 > git --version # timeout=10 00:00:06.647 > git --version # 'git version 2.39.2' 00:00:06.647 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.657 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.657 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/26 # timeout=5 00:00:12.093 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:12.103 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:12.114 Checking out Revision 124d5bb683991a063807d96399433650600a89c8 (FETCH_HEAD) 00:00:12.114 > git config core.sparsecheckout # timeout=10 00:00:12.125 > git read-tree -mu HEAD # timeout=10 00:00:12.147 > git checkout -f 124d5bb683991a063807d96399433650600a89c8 # timeout=5 00:00:12.166 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch and nightly" 00:00:12.166 > git rev-list --no-walk bb4bbb76f2437bc8cff7e7e4a466bce7165cd7f0 # timeout=10 00:00:12.255 [Pipeline] Start of Pipeline 00:00:12.268 [Pipeline] library 00:00:12.269 Loading library shm_lib@master 00:00:12.270 Library shm_lib@master is cached. Copying from home. 00:00:12.287 [Pipeline] node 00:00:12.299 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest 00:00:12.301 [Pipeline] { 00:00:12.312 [Pipeline] catchError 00:00:12.313 [Pipeline] { 00:00:12.326 [Pipeline] wrap 00:00:12.335 [Pipeline] { 00:00:12.343 [Pipeline] stage 00:00:12.345 [Pipeline] { (Prologue) 00:00:12.364 [Pipeline] echo 00:00:12.366 Node: VM-host-SM0 00:00:12.372 [Pipeline] cleanWs 00:00:12.380 [WS-CLEANUP] Deleting project workspace... 00:00:12.380 [WS-CLEANUP] Deferred wipeout is used... 
00:00:12.386 [WS-CLEANUP] done 00:00:12.570 [Pipeline] setCustomBuildProperty 00:00:12.631 [Pipeline] httpRequest 00:00:12.662 [Pipeline] echo 00:00:12.663 Sorcerer 10.211.164.101 is alive 00:00:12.670 [Pipeline] httpRequest 00:00:12.673 HttpMethod: GET 00:00:12.674 URL: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz 00:00:12.674 Sending request to url: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz 00:00:12.680 Response Code: HTTP/1.1 200 OK 00:00:12.681 Success: Status code 200 is in the accepted range: 200,404 00:00:12.682 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz 00:00:24.701 [Pipeline] sh 00:00:24.981 + tar --no-same-owner -xf jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz 00:00:24.997 [Pipeline] httpRequest 00:00:25.019 [Pipeline] echo 00:00:25.022 Sorcerer 10.211.164.101 is alive 00:00:25.032 [Pipeline] httpRequest 00:00:25.036 HttpMethod: GET 00:00:25.037 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:25.038 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:25.041 Response Code: HTTP/1.1 200 OK 00:00:25.042 Success: Status code 200 is in the accepted range: 200,404 00:00:25.043 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:03:13.831 [Pipeline] sh 00:03:14.108 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:03:17.398 [Pipeline] sh 00:03:17.674 + git -C spdk log --oneline -n5 00:03:17.674 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:03:17.674 fc2398dfa raid: clear base bdev configure_cb after executing 00:03:17.674 5558f3f50 raid: complete bdev_raid_create after sb is written 00:03:17.674 d005e023b raid: fix empty slot not updated in sb after resize 00:03:17.674 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:03:17.691 [Pipeline] writeFile 00:03:17.704 [Pipeline] sh 00:03:17.978 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:17.989 [Pipeline] sh 00:03:18.265 + cat autorun-spdk.conf 00:03:18.266 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:18.266 SPDK_TEST_NVME=1 00:03:18.266 SPDK_TEST_FTL=1 00:03:18.266 SPDK_TEST_ISAL=1 00:03:18.266 SPDK_RUN_ASAN=1 00:03:18.266 SPDK_RUN_UBSAN=1 00:03:18.266 SPDK_TEST_XNVME=1 00:03:18.266 SPDK_TEST_NVME_FDP=1 00:03:18.266 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:18.271 RUN_NIGHTLY=0 00:03:18.273 [Pipeline] } 00:03:18.289 [Pipeline] // stage 00:03:18.305 [Pipeline] stage 00:03:18.306 [Pipeline] { (Run VM) 00:03:18.320 [Pipeline] sh 00:03:18.598 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:18.599 + echo 'Start stage prepare_nvme.sh' 00:03:18.599 Start stage prepare_nvme.sh 00:03:18.599 + [[ -n 6 ]] 00:03:18.599 + disk_prefix=ex6 00:03:18.599 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:03:18.599 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:03:18.599 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:03:18.599 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:18.599 ++ SPDK_TEST_NVME=1 00:03:18.599 ++ SPDK_TEST_FTL=1 00:03:18.599 ++ SPDK_TEST_ISAL=1 00:03:18.599 ++ SPDK_RUN_ASAN=1 00:03:18.599 ++ SPDK_RUN_UBSAN=1 00:03:18.599 ++ SPDK_TEST_XNVME=1 00:03:18.599 ++ SPDK_TEST_NVME_FDP=1 00:03:18.599 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:18.599 ++ RUN_NIGHTLY=0 00:03:18.599 + cd /var/jenkins/workspace/nvme-vg-autotest 00:03:18.599 + nvme_files=() 00:03:18.599 + declare -A nvme_files 00:03:18.599 + backend_dir=/var/lib/libvirt/images/backends 00:03:18.599 + nvme_files['nvme.img']=5G 00:03:18.599 + nvme_files['nvme-cmb.img']=5G 00:03:18.599 + nvme_files['nvme-multi0.img']=4G 00:03:18.599 + nvme_files['nvme-multi1.img']=4G 00:03:18.599 + nvme_files['nvme-multi2.img']=4G 00:03:18.599 + nvme_files['nvme-openstack.img']=8G 00:03:18.599 + nvme_files['nvme-zns.img']=5G 00:03:18.599 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:18.599 + (( SPDK_TEST_FTL == 1 )) 00:03:18.599 + nvme_files["nvme-ftl.img"]=6G 00:03:18.599 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:18.599 + nvme_files["nvme-fdp.img"]=1G 00:03:18.599 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:18.599 + for nvme in "${!nvme_files[@]}" 00:03:18.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:03:18.599 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:18.599 + for nvme in "${!nvme_files[@]}" 00:03:18.599 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:03:18.857 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:03:18.857 + for nvme in "${!nvme_files[@]}" 00:03:18.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:03:18.857 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:18.857 + for nvme in "${!nvme_files[@]}" 00:03:18.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:03:18.857 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:18.857 + for nvme in "${!nvme_files[@]}" 00:03:18.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:03:18.857 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:18.857 + for nvme in "${!nvme_files[@]}" 00:03:18.857 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:03:19.115 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:19.373 + for nvme in "${!nvme_files[@]}" 00:03:19.373 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:03:19.373 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:19.373 + for nvme in "${!nvme_files[@]}" 00:03:19.373 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:03:19.632 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:03:19.632 + for nvme in "${!nvme_files[@]}" 00:03:19.632 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:03:19.632 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:19.632 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:03:19.632 + echo 'End stage prepare_nvme.sh' 00:03:19.632 End stage prepare_nvme.sh 00:03:19.642 [Pipeline] sh 00:03:19.920 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:19.920 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:03:19.920 00:03:19.920 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:03:19.920 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:03:19.920 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:03:19.920 HELP=0 00:03:19.920 DRY_RUN=0 00:03:19.920 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:03:19.920 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:03:19.920 NVME_AUTO_CREATE=0 00:03:19.920 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:03:19.920 NVME_CMB=,,,, 00:03:19.920 NVME_PMR=,,,, 00:03:19.920 NVME_ZNS=,,,, 00:03:19.920 NVME_MS=true,,,, 00:03:19.920 NVME_FDP=,,,on, 00:03:19.920 SPDK_VAGRANT_DISTRO=fedora38 00:03:19.920 SPDK_VAGRANT_VMCPU=10 00:03:19.920 SPDK_VAGRANT_VMRAM=12288 00:03:19.920 SPDK_VAGRANT_PROVIDER=libvirt 00:03:19.920 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:19.920 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:19.920 SPDK_OPENSTACK_NETWORK=0 00:03:19.920 VAGRANT_PACKAGE_BOX=0 00:03:19.920 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:19.920 FORCE_DISTRO=true 00:03:19.920 VAGRANT_BOX_VERSION= 00:03:19.920 EXTRA_VAGRANTFILES= 00:03:19.920 NIC_MODEL=e1000 00:03:19.920 00:03:19.920 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:03:19.920 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:03:23.200 Bringing machine 'default' up with 'libvirt' provider... 00:03:24.575 ==> default: Creating image (snapshot of base box volume). 00:03:24.575 ==> default: Creating domain with the following settings... 
00:03:24.575 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721926456_610cbf8eecc300f81219 00:03:24.575 ==> default: -- Domain type: kvm 00:03:24.575 ==> default: -- Cpus: 10 00:03:24.575 ==> default: -- Feature: acpi 00:03:24.575 ==> default: -- Feature: apic 00:03:24.575 ==> default: -- Feature: pae 00:03:24.575 ==> default: -- Memory: 12288M 00:03:24.575 ==> default: -- Memory Backing: hugepages: 00:03:24.575 ==> default: -- Management MAC: 00:03:24.575 ==> default: -- Loader: 00:03:24.575 ==> default: -- Nvram: 00:03:24.575 ==> default: -- Base box: spdk/fedora38 00:03:24.575 ==> default: -- Storage pool: default 00:03:24.575 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721926456_610cbf8eecc300f81219.img (20G) 00:03:24.575 ==> default: -- Volume Cache: default 00:03:24.575 ==> default: -- Kernel: 00:03:24.575 ==> default: -- Initrd: 00:03:24.575 ==> default: -- Graphics Type: vnc 00:03:24.575 ==> default: -- Graphics Port: -1 00:03:24.575 ==> default: -- Graphics IP: 127.0.0.1 00:03:24.575 ==> default: -- Graphics Password: Not defined 00:03:24.575 ==> default: -- Video Type: cirrus 00:03:24.575 ==> default: -- Video VRAM: 9216 00:03:24.575 ==> default: -- Sound Type: 00:03:24.575 ==> default: -- Keymap: en-us 00:03:24.575 ==> default: -- TPM Path: 00:03:24.575 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:24.575 ==> default: -- Command line args: 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:24.575 ==> default: -> value=-drive, 00:03:24.575 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:24.575 ==> default: -> value=-device, 00:03:24.575 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:24.575 ==> default: Creating shared folders metadata... 00:03:24.575 ==> default: Starting domain. 00:03:26.477 ==> default: Waiting for domain to get an IP address... 00:03:44.620 ==> default: Waiting for SSH to become available... 00:03:44.620 ==> default: Configuring and enabling network interfaces... 00:03:48.807 default: SSH address: 192.168.121.5:22 00:03:48.807 default: SSH username: vagrant 00:03:48.807 default: SSH auth method: private key 00:03:50.705 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:58.833 ==> default: Mounting SSHFS shared folder... 00:03:59.769 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:03:59.769 ==> default: Checking Mount.. 00:04:01.171 ==> default: Folder Successfully Mounted! 00:04:01.171 ==> default: Running provisioner: file... 00:04:02.106 default: ~/.gitconfig => .gitconfig 00:04:02.365 00:04:02.365 SUCCESS! 00:04:02.365 00:04:02.365 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:04:02.365 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:02.365 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:04:02.365 00:04:02.374 [Pipeline] } 00:04:02.393 [Pipeline] // stage 00:04:02.402 [Pipeline] dir 00:04:02.403 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:04:02.405 [Pipeline] { 00:04:02.418 [Pipeline] catchError 00:04:02.419 [Pipeline] { 00:04:02.431 [Pipeline] sh 00:04:02.711 + vagrant ssh-config --host vagrant 00:04:02.711 + sed -ne /^Host/,$p 00:04:02.711 + tee ssh_conf 00:04:06.900 Host vagrant 00:04:06.900 HostName 192.168.121.5 00:04:06.900 User vagrant 00:04:06.900 Port 22 00:04:06.900 UserKnownHostsFile /dev/null 00:04:06.900 StrictHostKeyChecking no 00:04:06.900 PasswordAuthentication no 00:04:06.900 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:06.900 IdentitiesOnly yes 00:04:06.900 LogLevel FATAL 00:04:06.900 ForwardAgent yes 00:04:06.900 ForwardX11 yes 00:04:06.900 00:04:06.914 [Pipeline] withEnv 00:04:06.916 [Pipeline] { 00:04:06.931 [Pipeline] sh 00:04:07.210 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:07.210 source /etc/os-release 00:04:07.210 [[ -e /image.version ]] && img=$(< /image.version) 00:04:07.210 # Minimal, systemd-like check. 
00:04:07.210 if [[ -e /.dockerenv ]]; then 00:04:07.210 # Clear garbage from the node's name: 00:04:07.210 # agt-er_autotest_547-896 -> autotest_547-896 00:04:07.210 # $HOSTNAME is the actual container id 00:04:07.211 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:07.211 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:07.211 # We can assume this is a mount from a host where container is running, 00:04:07.211 # so fetch its hostname to easily identify the target swarm worker. 00:04:07.211 container="$(< /etc/hostname) ($agent)" 00:04:07.211 else 00:04:07.211 # Fallback 00:04:07.211 container=$agent 00:04:07.211 fi 00:04:07.211 fi 00:04:07.211 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:07.211 00:04:07.478 [Pipeline] } 00:04:07.499 [Pipeline] // withEnv 00:04:07.509 [Pipeline] setCustomBuildProperty 00:04:07.526 [Pipeline] stage 00:04:07.528 [Pipeline] { (Tests) 00:04:07.549 [Pipeline] sh 00:04:07.829 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:08.100 [Pipeline] sh 00:04:08.380 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:08.652 [Pipeline] timeout 00:04:08.652 Timeout set to expire in 40 min 00:04:08.654 [Pipeline] { 00:04:08.670 [Pipeline] sh 00:04:08.944 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:09.510 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:04:09.588 [Pipeline] sh 00:04:09.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:10.144 [Pipeline] sh 00:04:10.423 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:10.698 [Pipeline] sh 00:04:10.976 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:04:11.234 ++ readlink -f spdk_repo 00:04:11.234 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:11.234 + [[ -n /home/vagrant/spdk_repo ]] 00:04:11.234 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:11.234 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:11.234 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:11.234 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:11.234 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:11.234 + [[ nvme-vg-autotest == pkgdep-* ]] 00:04:11.234 + cd /home/vagrant/spdk_repo 00:04:11.234 + source /etc/os-release 00:04:11.234 ++ NAME='Fedora Linux' 00:04:11.234 ++ VERSION='38 (Cloud Edition)' 00:04:11.235 ++ ID=fedora 00:04:11.235 ++ VERSION_ID=38 00:04:11.235 ++ VERSION_CODENAME= 00:04:11.235 ++ PLATFORM_ID=platform:f38 00:04:11.235 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:11.235 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:11.235 ++ LOGO=fedora-logo-icon 00:04:11.235 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:11.235 ++ HOME_URL=https://fedoraproject.org/ 00:04:11.235 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:11.235 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:11.235 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:11.235 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:11.235 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:11.235 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:11.235 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:11.235 ++ SUPPORT_END=2024-05-14 00:04:11.235 ++ VARIANT='Cloud Edition' 00:04:11.235 ++ VARIANT_ID=cloud 00:04:11.235 + uname -a 00:04:11.235 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:11.235 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.750 Hugepages 00:04:11.750 node hugesize free / total 00:04:11.750 node0 1048576kB 0 / 0 00:04:11.750 node0 2048kB 0 / 0 00:04:11.750 00:04:11.750 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.008 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:12.008 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:12.008 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:12.008 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:12.008 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:12.008 + rm -f /tmp/spdk-ld-path 00:04:12.008 + source autorun-spdk.conf 00:04:12.008 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:12.009 ++ SPDK_TEST_NVME=1 00:04:12.009 ++ SPDK_TEST_FTL=1 00:04:12.009 ++ SPDK_TEST_ISAL=1 00:04:12.009 ++ SPDK_RUN_ASAN=1 00:04:12.009 ++ SPDK_RUN_UBSAN=1 00:04:12.009 ++ SPDK_TEST_XNVME=1 00:04:12.009 ++ SPDK_TEST_NVME_FDP=1 00:04:12.009 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:12.009 ++ RUN_NIGHTLY=0 00:04:12.009 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:12.009 + [[ -n '' ]] 00:04:12.009 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:12.009 + for M in /var/spdk/build-*-manifest.txt 00:04:12.009 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:12.009 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:12.009 + for M in /var/spdk/build-*-manifest.txt 00:04:12.009 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:12.009 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:12.009 ++ uname 00:04:12.009 + [[ Linux == \L\i\n\u\x ]] 00:04:12.009 + sudo dmesg -T 00:04:12.009 + sudo dmesg --clear 00:04:12.009 + dmesg_pid=5205 00:04:12.009 + sudo dmesg -Tw 00:04:12.009 + [[ Fedora Linux == FreeBSD ]] 00:04:12.009 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:12.009 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:12.009 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:12.009 + [[ -x /usr/src/fio-static/fio ]] 00:04:12.009 + export FIO_BIN=/usr/src/fio-static/fio 00:04:12.009 + FIO_BIN=/usr/src/fio-static/fio 00:04:12.009 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:12.009 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:12.009 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:12.009 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:12.009 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:12.009 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:12.009 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:12.009 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:12.009 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.009 Test configuration: 00:04:12.009 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:12.009 SPDK_TEST_NVME=1 00:04:12.009 SPDK_TEST_FTL=1 00:04:12.009 SPDK_TEST_ISAL=1 00:04:12.009 SPDK_RUN_ASAN=1 00:04:12.009 SPDK_RUN_UBSAN=1 00:04:12.009 SPDK_TEST_XNVME=1 00:04:12.009 SPDK_TEST_NVME_FDP=1 00:04:12.009 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:12.267 RUN_NIGHTLY=0 16:55:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:12.267 16:55:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:12.267 16:55:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.267 16:55:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.267 16:55:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.267 16:55:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.267 16:55:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.267 16:55:04 -- paths/export.sh@5 -- $ export PATH 00:04:12.267 16:55:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.267 16:55:04 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:12.267 16:55:04 -- common/autobuild_common.sh@447 -- $ date +%s 00:04:12.267 16:55:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721926504.XXXXXX 00:04:12.267 16:55:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721926504.ZBi8gE 00:04:12.267 16:55:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:04:12.267 16:55:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:04:12.267 16:55:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:12.267 16:55:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:12.267 16:55:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:12.267 16:55:04 -- common/autobuild_common.sh@463 -- $ get_config_params 00:04:12.267 16:55:04 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:04:12.267 16:55:04 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.267 16:55:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:12.267 16:55:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:04:12.267 16:55:04 -- pm/common@17 -- $ local monitor 00:04:12.267 16:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.267 16:55:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.267 16:55:04 -- pm/common@25 -- $ sleep 1 00:04:12.267 16:55:04 -- pm/common@21 -- $ date +%s 00:04:12.267 16:55:04 -- pm/common@21 -- $ date +%s 00:04:12.267 16:55:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721926504 00:04:12.267 16:55:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721926504 00:04:12.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721926504_collect-vmstat.pm.log 00:04:12.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721926504_collect-cpu-load.pm.log 00:04:13.202 16:55:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:04:13.202 16:55:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:13.202 16:55:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:13.202 16:55:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:13.202 16:55:05 -- spdk/autobuild.sh@16 -- $ date -u 00:04:13.202 Thu Jul 25 04:55:05 PM UTC 2024 00:04:13.202 16:55:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:13.202 v24.09-pre-321-g704257090 00:04:13.202 16:55:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:13.202 16:55:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:13.202 16:55:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:13.202 16:55:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:13.202 16:55:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.202 ************************************ 00:04:13.202 START TEST asan 00:04:13.202 ************************************ 00:04:13.202 using asan 00:04:13.202 16:55:05 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:04:13.202 00:04:13.202 
real 0m0.000s 00:04:13.202 user 0m0.000s 00:04:13.202 sys 0m0.000s 00:04:13.202 16:55:05 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:13.202 16:55:05 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:13.202 ************************************ 00:04:13.202 END TEST asan 00:04:13.202 ************************************ 00:04:13.202 16:55:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:13.202 16:55:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:13.202 16:55:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:13.202 16:55:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:13.202 16:55:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.202 ************************************ 00:04:13.202 START TEST ubsan 00:04:13.202 ************************************ 00:04:13.202 using ubsan 00:04:13.202 16:55:05 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:13.202 00:04:13.202 real 0m0.000s 00:04:13.202 user 0m0.000s 00:04:13.202 sys 0m0.000s 00:04:13.202 16:55:05 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:13.202 16:55:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:13.202 ************************************ 00:04:13.202 END TEST ubsan 00:04:13.202 ************************************ 00:04:13.461 16:55:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:13.461 16:55:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:13.461 16:55:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:13.461 16:55:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:04:13.461 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:13.461 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:14.026 Using 'verbs' RDMA provider 00:04:27.171 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:42.036 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:42.036 Creating mk/config.mk...done. 00:04:42.036 Creating mk/cc.flags.mk...done. 00:04:42.036 Type 'make' to build. 
00:04:42.036 16:55:32 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:42.036 16:55:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:42.036 16:55:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:42.036 16:55:32 -- common/autotest_common.sh@10 -- $ set +x 00:04:42.036 ************************************ 00:04:42.036 START TEST make 00:04:42.036 ************************************ 00:04:42.036 16:55:32 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:42.036 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:42.036 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:42.036 meson setup builddir \ 00:04:42.036 -Dwith-libaio=enabled \ 00:04:42.036 -Dwith-liburing=enabled \ 00:04:42.036 -Dwith-libvfn=disabled \ 00:04:42.036 -Dwith-spdk=false && \ 00:04:42.036 meson compile -C builddir && \ 00:04:42.036 cd -) 00:04:42.036 make[1]: Nothing to be done for 'all'. 00:04:44.561 The Meson build system 00:04:44.561 Version: 1.3.1 00:04:44.561 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:44.561 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:44.561 Build type: native build 00:04:44.561 Project name: xnvme 00:04:44.561 Project version: 0.7.3 00:04:44.561 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:44.561 C linker for the host machine: cc ld.bfd 2.39-16 00:04:44.561 Host machine cpu family: x86_64 00:04:44.561 Host machine cpu: x86_64 00:04:44.561 Message: host_machine.system: linux 00:04:44.561 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:44.561 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:44.561 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:44.561 Run-time dependency threads found: YES 00:04:44.561 Has header "setupapi.h" : NO 00:04:44.561 Has header "linux/blkzoned.h" : YES 00:04:44.561 Has header "linux/blkzoned.h" : YES (cached) 00:04:44.561 Has header "libaio.h" : YES 00:04:44.561 Library aio found: YES 00:04:44.561 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:44.561 Run-time dependency liburing found: YES 2.2 00:04:44.561 Dependency libvfn skipped: feature with-libvfn disabled 00:04:44.561 Run-time dependency appleframeworks found: NO (tried framework) 00:04:44.561 Run-time dependency appleframeworks found: NO (tried framework) 00:04:44.561 Configuring xnvme_config.h using configuration 00:04:44.561 Configuring xnvme.spec using configuration 00:04:44.561 Run-time dependency bash-completion found: YES 2.11 00:04:44.561 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:44.561 Program cp found: YES (/usr/bin/cp) 00:04:44.561 Has header "winsock2.h" : NO 00:04:44.561 Has header "dbghelp.h" : NO 00:04:44.561 Library rpcrt4 found: NO 00:04:44.561 Library rt found: YES 00:04:44.561 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:44.561 Found CMake: /usr/bin/cmake (3.27.7) 00:04:44.561 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:44.561 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:44.561 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:44.561 Build targets in project: 32 00:04:44.561 00:04:44.561 xnvme 0.7.3 00:04:44.561 00:04:44.561 User defined options 00:04:44.561 with-libaio : enabled 00:04:44.561 with-liburing: enabled 00:04:44.561 with-libvfn : disabled 00:04:44.561 with-spdk : false 00:04:44.561 00:04:44.561 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:45.492 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:45.492 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:45.492 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:45.492 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:45.492 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:45.492 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:45.492 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:45.492 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:45.492 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:45.748 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:45.748 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:45.748 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:45.748 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:45.748 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:45.748 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:45.748 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:46.005 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:46.005 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:46.005 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:46.005 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:46.005 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:46.005 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:46.005 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:46.005 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:46.005 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:46.005 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:46.263 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:46.263 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:46.263 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:46.263 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:46.263 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:46.263 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:46.263 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:46.263 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:46.263 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:46.263 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:46.263 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:46.263 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:46.263 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:46.263 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:46.263 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:46.263 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:46.263 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:46.263 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:46.520 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:46.520 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:46.520 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:46.520 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:46.520 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:46.520 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:46.520 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:46.520 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:46.520 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:46.520 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:46.520 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:46.777 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:46.777 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:46.777 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:46.777 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:46.777 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:46.777 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:46.777 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:46.777 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:46.777 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:46.777 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:46.777 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:47.050 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:47.050 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:47.050 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:47.050 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:47.050 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:47.050 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:47.050 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:47.050 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:47.050 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:47.320 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:47.320 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:47.320 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:47.320 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:47.320 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:47.320 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:47.320 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:47.320 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:47.578 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:47.578 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:47.578 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:47.578 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:47.578 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:47.578 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:47.835 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:47.835 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:47.835 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:47.835 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:47.835 [93/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:47.835 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:47.835 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:47.835 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:47.835 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:47.836 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:47.836 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:47.836 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:47.836 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:47.836 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:47.836 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:47.836 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:47.836 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:47.836 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:47.836 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:48.093 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:48.093 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:48.093 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:48.093 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:48.093 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:48.093 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:48.093 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:48.093 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:48.093 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:48.093 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:48.093 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:48.093 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:48.093 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:48.093 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:48.093 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:48.350 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:48.350 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:48.350 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:48.350 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:48.350 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:48.350 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:48.350 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 
00:04:48.350 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:48.350 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:48.350 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:48.608 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:48.608 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:48.608 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:48.608 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:48.608 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:48.608 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:48.608 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:48.865 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:48.865 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:48.865 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:48.865 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:49.123 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:49.123 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:49.123 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:49.123 [147/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:49.123 [148/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:49.123 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:49.123 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:49.380 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:49.380 [152/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:49.380 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:49.380 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:49.380 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:49.380 [156/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:49.380 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:49.638 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:49.638 [159/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:49.638 [160/203] Linking target lib/libxnvme.so 00:04:49.638 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:49.638 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:49.638 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:49.638 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:49.638 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:49.896 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:49.896 [167/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:49.896 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:49.896 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:49.896 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:50.154 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:50.154 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:50.154 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:50.154 [174/203] Linking static target lib/libxnvme.a 00:04:50.154 [175/203] Linking target tests/xnvme_tests_xnvme_file 00:04:50.412 [176/203] 
Linking target tests/xnvme_tests_xnvme_cli 00:04:50.412 [177/203] Linking target tests/xnvme_tests_enum 00:04:50.412 [178/203] Linking target tests/xnvme_tests_buf 00:04:50.412 [179/203] Linking target tests/xnvme_tests_ioworker 00:04:50.412 [180/203] Linking target tests/xnvme_tests_async_intf 00:04:50.412 [181/203] Linking target tests/xnvme_tests_znd_state 00:04:50.412 [182/203] Linking target tests/xnvme_tests_lblk 00:04:50.412 [183/203] Linking target tests/xnvme_tests_scc 00:04:50.412 [184/203] Linking target tests/xnvme_tests_cli 00:04:50.412 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:50.412 [186/203] Linking target tests/xnvme_tests_znd_append 00:04:50.412 [187/203] Linking target tools/kvs 00:04:50.412 [188/203] Linking target tests/xnvme_tests_kvs 00:04:50.412 [189/203] Linking target tools/lblk 00:04:50.412 [190/203] Linking target tools/xnvme 00:04:50.412 [191/203] Linking target tests/xnvme_tests_map 00:04:50.413 [192/203] Linking target examples/xnvme_dev 00:04:50.413 [193/203] Linking target tools/xdd 00:04:50.413 [194/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:50.413 [195/203] Linking target examples/xnvme_enum 00:04:50.413 [196/203] Linking target tools/xnvme_file 00:04:50.413 [197/203] Linking target tools/zoned 00:04:50.413 [198/203] Linking target examples/zoned_io_async 00:04:50.413 [199/203] Linking target examples/xnvme_hello 00:04:50.413 [200/203] Linking target examples/xnvme_io_async 00:04:50.413 [201/203] Linking target examples/xnvme_single_sync 00:04:50.413 [202/203] Linking target examples/xnvme_single_async 00:04:50.413 [203/203] Linking target examples/zoned_io_sync 00:04:50.413 INFO: autodetecting backend as ninja 00:04:50.413 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:50.673 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:00.703 The Meson build system 00:05:00.703 Version: 1.3.1 00:05:00.703 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:00.703 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:00.703 Build type: native build 00:05:00.703 Program cat found: YES (/usr/bin/cat) 00:05:00.703 Project name: DPDK 00:05:00.703 Project version: 24.03.0 00:05:00.703 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:00.703 C linker for the host machine: cc ld.bfd 2.39-16 00:05:00.703 Host machine cpu family: x86_64 00:05:00.703 Host machine cpu: x86_64 00:05:00.703 Message: ## Building in Developer Mode ## 00:05:00.703 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:00.703 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:00.703 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:00.703 Program python3 found: YES (/usr/bin/python3) 00:05:00.703 Program cat found: YES (/usr/bin/cat) 00:05:00.703 Compiler for C supports arguments -march=native: YES 00:05:00.703 Checking for size of "void *" : 8 00:05:00.703 Checking for size of "void *" : 8 (cached) 00:05:00.703 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:05:00.703 Library m found: YES 00:05:00.703 Library numa found: YES 00:05:00.703 Has header "numaif.h" : YES 00:05:00.703 Library fdt found: NO 00:05:00.703 Library execinfo found: NO 00:05:00.703 Has header "execinfo.h" : YES 00:05:00.703 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:00.703 Run-time dependency libarchive found: 
NO (tried pkgconfig) 00:05:00.703 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:00.703 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:00.703 Run-time dependency openssl found: YES 3.0.9 00:05:00.703 Run-time dependency libpcap found: YES 1.10.4 00:05:00.703 Has header "pcap.h" with dependency libpcap: YES 00:05:00.703 Compiler for C supports arguments -Wcast-qual: YES 00:05:00.703 Compiler for C supports arguments -Wdeprecated: YES 00:05:00.703 Compiler for C supports arguments -Wformat: YES 00:05:00.703 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:00.703 Compiler for C supports arguments -Wformat-security: NO 00:05:00.703 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:00.703 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:00.703 Compiler for C supports arguments -Wnested-externs: YES 00:05:00.703 Compiler for C supports arguments -Wold-style-definition: YES 00:05:00.703 Compiler for C supports arguments -Wpointer-arith: YES 00:05:00.703 Compiler for C supports arguments -Wsign-compare: YES 00:05:00.703 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:00.703 Compiler for C supports arguments -Wundef: YES 00:05:00.703 Compiler for C supports arguments -Wwrite-strings: YES 00:05:00.703 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:00.703 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:00.703 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:00.703 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:00.703 Program objdump found: YES (/usr/bin/objdump) 00:05:00.703 Compiler for C supports arguments -mavx512f: YES 00:05:00.703 Checking if "AVX512 checking" compiles: YES 00:05:00.703 Fetching value of define "__SSE4_2__" : 1 00:05:00.703 Fetching value of define "__AES__" : 1 00:05:00.703 Fetching value of define "__AVX__" : 1 00:05:00.703 Fetching value of define "__AVX2__" : 1 00:05:00.703 Fetching value of define "__AVX512BW__" : (undefined) 00:05:00.703 Fetching value of define "__AVX512CD__" : (undefined) 00:05:00.703 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:00.703 Fetching value of define "__AVX512F__" : (undefined) 00:05:00.703 Fetching value of define "__AVX512VL__" : (undefined) 00:05:00.703 Fetching value of define "__PCLMUL__" : 1 00:05:00.703 Fetching value of define "__RDRND__" : 1 00:05:00.703 Fetching value of define "__RDSEED__" : 1 00:05:00.703 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:00.703 Fetching value of define "__znver1__" : (undefined) 00:05:00.703 Fetching value of define "__znver2__" : (undefined) 00:05:00.703 Fetching value of define "__znver3__" : (undefined) 00:05:00.703 Fetching value of define "__znver4__" : (undefined) 00:05:00.703 Library asan found: YES 00:05:00.703 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:00.703 Message: lib/log: Defining dependency "log" 00:05:00.703 Message: lib/kvargs: Defining dependency "kvargs" 00:05:00.703 Message: lib/telemetry: Defining dependency "telemetry" 00:05:00.703 Library rt found: YES 00:05:00.703 Checking for function "getentropy" : NO 00:05:00.704 Message: lib/eal: Defining dependency "eal" 00:05:00.704 Message: lib/ring: Defining dependency "ring" 00:05:00.704 Message: lib/rcu: Defining dependency "rcu" 00:05:00.704 Message: lib/mempool: Defining dependency "mempool" 00:05:00.704 Message: lib/mbuf: Defining dependency "mbuf" 00:05:00.704 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:05:00.704 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:00.704 Compiler for C supports arguments -mpclmul: YES 00:05:00.704 Compiler for C supports arguments -maes: YES 00:05:00.704 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:00.704 Compiler for C supports arguments -mavx512bw: YES 00:05:00.704 Compiler for C supports arguments -mavx512dq: YES 00:05:00.704 Compiler for C supports arguments -mavx512vl: YES 00:05:00.704 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:00.704 Compiler for C supports arguments -mavx2: YES 00:05:00.704 Compiler for C supports arguments -mavx: YES 00:05:00.704 Message: lib/net: Defining dependency "net" 00:05:00.704 Message: lib/meter: Defining dependency "meter" 00:05:00.704 Message: lib/ethdev: Defining dependency "ethdev" 00:05:00.704 Message: lib/pci: Defining dependency "pci" 00:05:00.704 Message: lib/cmdline: Defining dependency "cmdline" 00:05:00.704 Message: lib/hash: Defining dependency "hash" 00:05:00.704 Message: lib/timer: Defining dependency "timer" 00:05:00.704 Message: lib/compressdev: Defining dependency "compressdev" 00:05:00.704 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:00.704 Message: lib/dmadev: Defining dependency "dmadev" 00:05:00.704 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:00.704 Message: lib/power: Defining dependency "power" 00:05:00.704 Message: lib/reorder: Defining dependency "reorder" 00:05:00.704 Message: lib/security: Defining dependency "security" 00:05:00.704 Has header "linux/userfaultfd.h" : YES 00:05:00.704 Has header "linux/vduse.h" : YES 00:05:00.704 Message: lib/vhost: Defining dependency "vhost" 00:05:00.704 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:00.704 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:00.704 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:00.704 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:00.704 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:00.704 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:00.704 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:00.704 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:00.704 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:00.704 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:00.704 Program doxygen found: YES (/usr/bin/doxygen) 00:05:00.704 Configuring doxy-api-html.conf using configuration 00:05:00.704 Configuring doxy-api-man.conf using configuration 00:05:00.704 Program mandb found: YES (/usr/bin/mandb) 00:05:00.704 Program sphinx-build found: NO 00:05:00.704 Configuring rte_build_config.h using configuration 00:05:00.704 Message: 00:05:00.704 ================= 00:05:00.704 Applications Enabled 00:05:00.704 ================= 00:05:00.704 00:05:00.704 apps: 00:05:00.704 00:05:00.704 00:05:00.704 Message: 00:05:00.704 ================= 00:05:00.704 Libraries Enabled 00:05:00.704 ================= 00:05:00.704 00:05:00.704 libs: 00:05:00.704 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:00.704 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:00.704 cryptodev, dmadev, power, reorder, security, vhost, 00:05:00.704 00:05:00.704 Message: 00:05:00.704 =============== 00:05:00.704 Drivers Enabled 00:05:00.704 =============== 00:05:00.704 00:05:00.704 
common: 00:05:00.704 00:05:00.704 bus: 00:05:00.704 pci, vdev, 00:05:00.704 mempool: 00:05:00.704 ring, 00:05:00.704 dma: 00:05:00.704 00:05:00.704 net: 00:05:00.704 00:05:00.704 crypto: 00:05:00.704 00:05:00.704 compress: 00:05:00.704 00:05:00.704 vdpa: 00:05:00.704 00:05:00.704 00:05:00.704 Message: 00:05:00.704 ================= 00:05:00.704 Content Skipped 00:05:00.704 ================= 00:05:00.704 00:05:00.704 apps: 00:05:00.704 dumpcap: explicitly disabled via build config 00:05:00.704 graph: explicitly disabled via build config 00:05:00.704 pdump: explicitly disabled via build config 00:05:00.704 proc-info: explicitly disabled via build config 00:05:00.704 test-acl: explicitly disabled via build config 00:05:00.704 test-bbdev: explicitly disabled via build config 00:05:00.704 test-cmdline: explicitly disabled via build config 00:05:00.704 test-compress-perf: explicitly disabled via build config 00:05:00.704 test-crypto-perf: explicitly disabled via build config 00:05:00.704 test-dma-perf: explicitly disabled via build config 00:05:00.704 test-eventdev: explicitly disabled via build config 00:05:00.704 test-fib: explicitly disabled via build config 00:05:00.704 test-flow-perf: explicitly disabled via build config 00:05:00.704 test-gpudev: explicitly disabled via build config 00:05:00.704 test-mldev: explicitly disabled via build config 00:05:00.704 test-pipeline: explicitly disabled via build config 00:05:00.704 test-pmd: explicitly disabled via build config 00:05:00.704 test-regex: explicitly disabled via build config 00:05:00.704 test-sad: explicitly disabled via build config 00:05:00.704 test-security-perf: explicitly disabled via build config 00:05:00.704 00:05:00.704 libs: 00:05:00.704 argparse: explicitly disabled via build config 00:05:00.704 metrics: explicitly disabled via build config 00:05:00.704 acl: explicitly disabled via build config 00:05:00.704 bbdev: explicitly disabled via build config 00:05:00.704 bitratestats: explicitly disabled via build config 00:05:00.704 bpf: explicitly disabled via build config 00:05:00.704 cfgfile: explicitly disabled via build config 00:05:00.704 distributor: explicitly disabled via build config 00:05:00.704 efd: explicitly disabled via build config 00:05:00.704 eventdev: explicitly disabled via build config 00:05:00.704 dispatcher: explicitly disabled via build config 00:05:00.704 gpudev: explicitly disabled via build config 00:05:00.704 gro: explicitly disabled via build config 00:05:00.704 gso: explicitly disabled via build config 00:05:00.704 ip_frag: explicitly disabled via build config 00:05:00.704 jobstats: explicitly disabled via build config 00:05:00.704 latencystats: explicitly disabled via build config 00:05:00.704 lpm: explicitly disabled via build config 00:05:00.704 member: explicitly disabled via build config 00:05:00.704 pcapng: explicitly disabled via build config 00:05:00.704 rawdev: explicitly disabled via build config 00:05:00.704 regexdev: explicitly disabled via build config 00:05:00.704 mldev: explicitly disabled via build config 00:05:00.704 rib: explicitly disabled via build config 00:05:00.704 sched: explicitly disabled via build config 00:05:00.704 stack: explicitly disabled via build config 00:05:00.704 ipsec: explicitly disabled via build config 00:05:00.704 pdcp: explicitly disabled via build config 00:05:00.704 fib: explicitly disabled via build config 00:05:00.704 port: explicitly disabled via build config 00:05:00.704 pdump: explicitly disabled via build config 00:05:00.704 table: explicitly disabled via 
build config 00:05:00.704 pipeline: explicitly disabled via build config 00:05:00.704 graph: explicitly disabled via build config 00:05:00.704 node: explicitly disabled via build config 00:05:00.704 00:05:00.704 drivers: 00:05:00.704 common/cpt: not in enabled drivers build config 00:05:00.704 common/dpaax: not in enabled drivers build config 00:05:00.704 common/iavf: not in enabled drivers build config 00:05:00.704 common/idpf: not in enabled drivers build config 00:05:00.704 common/ionic: not in enabled drivers build config 00:05:00.704 common/mvep: not in enabled drivers build config 00:05:00.704 common/octeontx: not in enabled drivers build config 00:05:00.704 bus/auxiliary: not in enabled drivers build config 00:05:00.704 bus/cdx: not in enabled drivers build config 00:05:00.704 bus/dpaa: not in enabled drivers build config 00:05:00.704 bus/fslmc: not in enabled drivers build config 00:05:00.704 bus/ifpga: not in enabled drivers build config 00:05:00.704 bus/platform: not in enabled drivers build config 00:05:00.704 bus/uacce: not in enabled drivers build config 00:05:00.704 bus/vmbus: not in enabled drivers build config 00:05:00.704 common/cnxk: not in enabled drivers build config 00:05:00.704 common/mlx5: not in enabled drivers build config 00:05:00.704 common/nfp: not in enabled drivers build config 00:05:00.704 common/nitrox: not in enabled drivers build config 00:05:00.704 common/qat: not in enabled drivers build config 00:05:00.704 common/sfc_efx: not in enabled drivers build config 00:05:00.704 mempool/bucket: not in enabled drivers build config 00:05:00.704 mempool/cnxk: not in enabled drivers build config 00:05:00.704 mempool/dpaa: not in enabled drivers build config 00:05:00.704 mempool/dpaa2: not in enabled drivers build config 00:05:00.704 mempool/octeontx: not in enabled drivers build config 00:05:00.704 mempool/stack: not in enabled drivers build config 00:05:00.704 dma/cnxk: not in enabled drivers build config 00:05:00.704 dma/dpaa: not in enabled drivers build config 00:05:00.704 dma/dpaa2: not in enabled drivers build config 00:05:00.704 dma/hisilicon: not in enabled drivers build config 00:05:00.704 dma/idxd: not in enabled drivers build config 00:05:00.704 dma/ioat: not in enabled drivers build config 00:05:00.704 dma/skeleton: not in enabled drivers build config 00:05:00.704 net/af_packet: not in enabled drivers build config 00:05:00.704 net/af_xdp: not in enabled drivers build config 00:05:00.704 net/ark: not in enabled drivers build config 00:05:00.704 net/atlantic: not in enabled drivers build config 00:05:00.704 net/avp: not in enabled drivers build config 00:05:00.704 net/axgbe: not in enabled drivers build config 00:05:00.704 net/bnx2x: not in enabled drivers build config 00:05:00.705 net/bnxt: not in enabled drivers build config 00:05:00.705 net/bonding: not in enabled drivers build config 00:05:00.705 net/cnxk: not in enabled drivers build config 00:05:00.705 net/cpfl: not in enabled drivers build config 00:05:00.705 net/cxgbe: not in enabled drivers build config 00:05:00.705 net/dpaa: not in enabled drivers build config 00:05:00.705 net/dpaa2: not in enabled drivers build config 00:05:00.705 net/e1000: not in enabled drivers build config 00:05:00.705 net/ena: not in enabled drivers build config 00:05:00.705 net/enetc: not in enabled drivers build config 00:05:00.705 net/enetfec: not in enabled drivers build config 00:05:00.705 net/enic: not in enabled drivers build config 00:05:00.705 net/failsafe: not in enabled drivers build config 00:05:00.705 
net/fm10k: not in enabled drivers build config 00:05:00.705 net/gve: not in enabled drivers build config 00:05:00.705 net/hinic: not in enabled drivers build config 00:05:00.705 net/hns3: not in enabled drivers build config 00:05:00.705 net/i40e: not in enabled drivers build config 00:05:00.705 net/iavf: not in enabled drivers build config 00:05:00.705 net/ice: not in enabled drivers build config 00:05:00.705 net/idpf: not in enabled drivers build config 00:05:00.705 net/igc: not in enabled drivers build config 00:05:00.705 net/ionic: not in enabled drivers build config 00:05:00.705 net/ipn3ke: not in enabled drivers build config 00:05:00.705 net/ixgbe: not in enabled drivers build config 00:05:00.705 net/mana: not in enabled drivers build config 00:05:00.705 net/memif: not in enabled drivers build config 00:05:00.705 net/mlx4: not in enabled drivers build config 00:05:00.705 net/mlx5: not in enabled drivers build config 00:05:00.705 net/mvneta: not in enabled drivers build config 00:05:00.705 net/mvpp2: not in enabled drivers build config 00:05:00.705 net/netvsc: not in enabled drivers build config 00:05:00.705 net/nfb: not in enabled drivers build config 00:05:00.705 net/nfp: not in enabled drivers build config 00:05:00.705 net/ngbe: not in enabled drivers build config 00:05:00.705 net/null: not in enabled drivers build config 00:05:00.705 net/octeontx: not in enabled drivers build config 00:05:00.705 net/octeon_ep: not in enabled drivers build config 00:05:00.705 net/pcap: not in enabled drivers build config 00:05:00.705 net/pfe: not in enabled drivers build config 00:05:00.705 net/qede: not in enabled drivers build config 00:05:00.705 net/ring: not in enabled drivers build config 00:05:00.705 net/sfc: not in enabled drivers build config 00:05:00.705 net/softnic: not in enabled drivers build config 00:05:00.705 net/tap: not in enabled drivers build config 00:05:00.705 net/thunderx: not in enabled drivers build config 00:05:00.705 net/txgbe: not in enabled drivers build config 00:05:00.705 net/vdev_netvsc: not in enabled drivers build config 00:05:00.705 net/vhost: not in enabled drivers build config 00:05:00.705 net/virtio: not in enabled drivers build config 00:05:00.705 net/vmxnet3: not in enabled drivers build config 00:05:00.705 raw/*: missing internal dependency, "rawdev" 00:05:00.705 crypto/armv8: not in enabled drivers build config 00:05:00.705 crypto/bcmfs: not in enabled drivers build config 00:05:00.705 crypto/caam_jr: not in enabled drivers build config 00:05:00.705 crypto/ccp: not in enabled drivers build config 00:05:00.705 crypto/cnxk: not in enabled drivers build config 00:05:00.705 crypto/dpaa_sec: not in enabled drivers build config 00:05:00.705 crypto/dpaa2_sec: not in enabled drivers build config 00:05:00.705 crypto/ipsec_mb: not in enabled drivers build config 00:05:00.705 crypto/mlx5: not in enabled drivers build config 00:05:00.705 crypto/mvsam: not in enabled drivers build config 00:05:00.705 crypto/nitrox: not in enabled drivers build config 00:05:00.705 crypto/null: not in enabled drivers build config 00:05:00.705 crypto/octeontx: not in enabled drivers build config 00:05:00.705 crypto/openssl: not in enabled drivers build config 00:05:00.705 crypto/scheduler: not in enabled drivers build config 00:05:00.705 crypto/uadk: not in enabled drivers build config 00:05:00.705 crypto/virtio: not in enabled drivers build config 00:05:00.705 compress/isal: not in enabled drivers build config 00:05:00.705 compress/mlx5: not in enabled drivers build config 00:05:00.705 
compress/nitrox: not in enabled drivers build config 00:05:00.705 compress/octeontx: not in enabled drivers build config 00:05:00.705 compress/zlib: not in enabled drivers build config 00:05:00.705 regex/*: missing internal dependency, "regexdev" 00:05:00.705 ml/*: missing internal dependency, "mldev" 00:05:00.705 vdpa/ifc: not in enabled drivers build config 00:05:00.705 vdpa/mlx5: not in enabled drivers build config 00:05:00.705 vdpa/nfp: not in enabled drivers build config 00:05:00.705 vdpa/sfc: not in enabled drivers build config 00:05:00.705 event/*: missing internal dependency, "eventdev" 00:05:00.705 baseband/*: missing internal dependency, "bbdev" 00:05:00.705 gpu/*: missing internal dependency, "gpudev" 00:05:00.705 00:05:00.705 00:05:00.963 Build targets in project: 85 00:05:00.963 00:05:00.963 DPDK 24.03.0 00:05:00.963 00:05:00.963 User defined options 00:05:00.963 buildtype : debug 00:05:00.963 default_library : shared 00:05:00.963 libdir : lib 00:05:00.963 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:00.963 b_sanitize : address 00:05:00.963 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:00.963 c_link_args : 00:05:00.963 cpu_instruction_set: native 00:05:00.963 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:00.963 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:00.963 enable_docs : false 00:05:00.963 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:00.963 enable_kmods : false 00:05:00.963 max_lcores : 128 00:05:00.963 tests : false 00:05:00.963 00:05:00.963 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:01.896 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:01.896 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:01.896 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:01.896 [3/268] Linking static target lib/librte_kvargs.a 00:05:02.154 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:02.154 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:02.154 [6/268] Linking static target lib/librte_log.a 00:05:02.721 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.721 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:02.721 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:02.992 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:02.992 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:02.992 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:02.992 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:03.250 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:03.250 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:03.508 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:05:03.508 [17/268] Linking target lib/librte_log.so.24.1 00:05:03.766 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:03.766 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:04.025 [20/268] Linking static target lib/librte_telemetry.a 00:05:04.025 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:04.025 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:04.025 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:04.284 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:04.284 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:04.284 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:04.284 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:04.542 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:04.799 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:04.799 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:05.057 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:05.057 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:05.057 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.314 [34/268] Linking target lib/librte_telemetry.so.24.1 00:05:05.573 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:05.573 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:05.573 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:05.573 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:05.831 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:05.831 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:05.831 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:05.831 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:05.831 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:05.831 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:05.831 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:06.421 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:06.678 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:06.678 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:06.935 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:06.935 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:06.935 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:06.935 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:07.192 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:07.449 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:07.449 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:08.014 
[56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:08.014 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:08.272 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:08.272 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:08.272 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:08.272 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:08.530 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:08.530 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:08.530 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:09.093 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:09.093 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:09.094 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:09.094 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:09.351 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:09.610 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:09.610 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:09.610 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:09.610 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:09.610 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:09.610 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:09.610 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:09.867 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:09.867 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:09.867 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:10.125 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:10.382 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:10.640 [82/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:10.640 [83/268] Linking static target lib/librte_rcu.a 00:05:10.640 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:10.640 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:10.640 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:10.640 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:10.640 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:10.898 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:10.898 [90/268] Linking static target lib/librte_mempool.a 00:05:10.898 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:10.898 [92/268] Linking static target lib/librte_ring.a 00:05:10.898 [93/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:10.898 [94/268] Linking static target lib/librte_eal.a 00:05:11.156 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.156 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:11.156 [97/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:11.156 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:11.413 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:11.413 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:11.413 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:11.977 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:11.977 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:11.977 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:11.977 [105/268] Linking static target lib/librte_meter.a 00:05:12.235 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:12.235 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:12.235 [108/268] Linking static target lib/librte_mbuf.a 00:05:12.235 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:12.235 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.493 [111/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:12.493 [112/268] Linking static target lib/librte_net.a 00:05:12.493 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.493 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:12.752 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:13.010 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:13.010 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.268 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:13.268 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:13.526 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.784 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:14.350 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:14.350 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:14.350 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:14.350 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:14.350 [126/268] Linking static target lib/librte_pci.a 00:05:14.609 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:14.609 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:14.609 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:14.609 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:14.867 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:14.867 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:14.867 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:14.867 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:14.867 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:15.125 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:15.125 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:15.125 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:15.125 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:15.125 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:15.125 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:15.383 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:15.383 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:15.383 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:15.383 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:15.383 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:15.383 [147/268] Linking static target lib/librte_cmdline.a 00:05:15.949 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:15.949 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:16.207 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:16.207 [151/268] Linking static target lib/librte_timer.a 00:05:16.207 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:16.465 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:16.762 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:16.763 [155/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.763 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:17.019 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:17.019 [158/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:17.019 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:17.019 [160/268] Linking static target lib/librte_ethdev.a 00:05:17.277 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:17.277 [162/268] Linking static target lib/librte_compressdev.a 00:05:17.277 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:17.535 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.535 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:17.535 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:17.792 [167/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:17.792 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:17.792 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:18.051 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:18.051 [171/268] Linking static target lib/librte_hash.a 00:05:18.051 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:18.051 [173/268] Linking static target lib/librte_dmadev.a 00:05:18.308 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.308 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:18.308 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:18.308 [177/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:18.308 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:18.308 [179/268] Linking static target lib/librte_cryptodev.a 00:05:18.566 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:18.566 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:18.824 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:18.824 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.082 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:19.082 [185/268] Linking static target lib/librte_power.a 00:05:19.340 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.340 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:19.598 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:19.598 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:19.598 [190/268] Linking static target lib/librte_reorder.a 00:05:19.598 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:19.598 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:19.598 [193/268] Linking static target lib/librte_security.a 00:05:20.163 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.163 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.422 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.422 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:20.422 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:20.680 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:20.680 [200/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.938 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:20.938 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:20.938 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:21.195 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:21.452 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:21.452 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:21.710 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:21.710 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:21.710 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:21.710 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:21.710 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:21.968 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:21.968 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:21.968 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:21.968 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:21.968 [216/268] Linking static target 
drivers/librte_bus_pci.a 00:05:22.226 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.226 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:22.226 [219/268] Linking static target drivers/librte_bus_vdev.a 00:05:22.226 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:22.226 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:22.484 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.484 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:22.484 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.484 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.484 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:22.742 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.307 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.307 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:23.307 [230/268] Linking target lib/librte_eal.so.24.1 00:05:23.565 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:23.565 [232/268] Linking target lib/librte_ring.so.24.1 00:05:23.565 [233/268] Linking target lib/librte_timer.so.24.1 00:05:23.565 [234/268] Linking target lib/librte_meter.so.24.1 00:05:23.565 [235/268] Linking target lib/librte_pci.so.24.1 00:05:23.565 [236/268] Linking target lib/librte_dmadev.so.24.1 00:05:23.565 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:23.823 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:23.823 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:23.823 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:23.823 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:23.823 [242/268] Linking target lib/librte_mempool.so.24.1 00:05:23.823 [243/268] Linking target lib/librte_rcu.so.24.1 00:05:23.823 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:23.823 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:23.823 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:24.081 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:24.081 [248/268] Linking target lib/librte_mbuf.so.24.1 00:05:24.081 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:24.081 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:24.466 [251/268] Linking target lib/librte_reorder.so.24.1 00:05:24.466 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:24.466 [253/268] Linking target lib/librte_net.so.24.1 00:05:24.466 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:24.466 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:24.466 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:24.466 [257/268] Linking target 
lib/librte_cmdline.so.24.1 00:05:24.466 [258/268] Linking target lib/librte_hash.so.24.1 00:05:24.466 [259/268] Linking target lib/librte_security.so.24.1 00:05:24.466 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:25.399 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:25.399 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:25.657 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:25.657 [264/268] Linking target lib/librte_power.so.24.1 00:05:29.840 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:29.840 [266/268] Linking static target lib/librte_vhost.a 00:05:31.738 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.738 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:31.738 INFO: autodetecting backend as ninja 00:05:31.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:33.109 CC lib/log/log.o 00:05:33.109 CC lib/log/log_deprecated.o 00:05:33.109 CC lib/log/log_flags.o 00:05:33.109 CC lib/ut_mock/mock.o 00:05:33.109 CC lib/ut/ut.o 00:05:33.366 LIB libspdk_ut.a 00:05:33.366 LIB libspdk_ut_mock.a 00:05:33.366 SO libspdk_ut.so.2.0 00:05:33.366 SO libspdk_ut_mock.so.6.0 00:05:33.366 LIB libspdk_log.a 00:05:33.366 SO libspdk_log.so.7.0 00:05:33.624 SYMLINK libspdk_ut_mock.so 00:05:33.624 SYMLINK libspdk_ut.so 00:05:33.624 SYMLINK libspdk_log.so 00:05:33.882 CC lib/util/base64.o 00:05:33.882 CC lib/util/bit_array.o 00:05:33.882 CC lib/util/crc16.o 00:05:33.882 CC lib/util/cpuset.o 00:05:33.882 CC lib/util/crc32c.o 00:05:33.882 CC lib/util/crc32.o 00:05:33.882 CXX lib/trace_parser/trace.o 00:05:33.882 CC lib/ioat/ioat.o 00:05:33.882 CC lib/dma/dma.o 00:05:33.882 CC lib/vfio_user/host/vfio_user_pci.o 00:05:33.882 CC lib/util/crc32_ieee.o 00:05:33.882 CC lib/util/crc64.o 00:05:33.882 CC lib/util/dif.o 00:05:34.140 CC lib/util/fd.o 00:05:34.140 CC lib/util/fd_group.o 00:05:34.140 CC lib/util/file.o 00:05:34.140 CC lib/vfio_user/host/vfio_user.o 00:05:34.140 CC lib/util/hexlify.o 00:05:34.140 LIB libspdk_dma.a 00:05:34.397 SO libspdk_dma.so.4.0 00:05:34.397 CC lib/util/iov.o 00:05:34.397 CC lib/util/math.o 00:05:34.397 LIB libspdk_ioat.a 00:05:34.397 SO libspdk_ioat.so.7.0 00:05:34.397 CC lib/util/net.o 00:05:34.397 SYMLINK libspdk_dma.so 00:05:34.397 CC lib/util/pipe.o 00:05:34.397 SYMLINK libspdk_ioat.so 00:05:34.397 CC lib/util/strerror_tls.o 00:05:34.397 CC lib/util/string.o 00:05:34.397 LIB libspdk_vfio_user.a 00:05:34.397 CC lib/util/uuid.o 00:05:34.397 CC lib/util/xor.o 00:05:34.397 SO libspdk_vfio_user.so.5.0 00:05:34.655 CC lib/util/zipf.o 00:05:34.655 SYMLINK libspdk_vfio_user.so 00:05:34.912 LIB libspdk_util.a 00:05:34.912 SO libspdk_util.so.10.0 00:05:35.170 SYMLINK libspdk_util.so 00:05:35.170 LIB libspdk_trace_parser.a 00:05:35.427 SO libspdk_trace_parser.so.5.0 00:05:35.427 SYMLINK libspdk_trace_parser.so 00:05:35.427 CC lib/vmd/vmd.o 00:05:35.427 CC lib/conf/conf.o 00:05:35.427 CC lib/vmd/led.o 00:05:35.427 CC lib/env_dpdk/env.o 00:05:35.427 CC lib/env_dpdk/pci.o 00:05:35.427 CC lib/env_dpdk/memory.o 00:05:35.427 CC lib/idxd/idxd.o 00:05:35.427 CC lib/rdma_utils/rdma_utils.o 00:05:35.427 CC lib/json/json_parse.o 00:05:35.427 CC lib/rdma_provider/common.o 00:05:35.685 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:35.685 CC lib/json/json_util.o 00:05:35.685 LIB 
libspdk_conf.a 00:05:35.685 SO libspdk_conf.so.6.0 00:05:35.942 CC lib/env_dpdk/init.o 00:05:35.942 SYMLINK libspdk_conf.so 00:05:35.942 CC lib/env_dpdk/threads.o 00:05:35.942 LIB libspdk_rdma_provider.a 00:05:35.942 SO libspdk_rdma_provider.so.6.0 00:05:35.942 CC lib/json/json_write.o 00:05:35.942 LIB libspdk_rdma_utils.a 00:05:35.942 SYMLINK libspdk_rdma_provider.so 00:05:35.942 CC lib/env_dpdk/pci_ioat.o 00:05:35.942 CC lib/env_dpdk/pci_virtio.o 00:05:36.200 SO libspdk_rdma_utils.so.1.0 00:05:36.200 SYMLINK libspdk_rdma_utils.so 00:05:36.200 CC lib/env_dpdk/pci_vmd.o 00:05:36.200 CC lib/idxd/idxd_user.o 00:05:36.200 CC lib/idxd/idxd_kernel.o 00:05:36.200 CC lib/env_dpdk/pci_idxd.o 00:05:36.200 CC lib/env_dpdk/pci_event.o 00:05:36.457 CC lib/env_dpdk/sigbus_handler.o 00:05:36.457 CC lib/env_dpdk/pci_dpdk.o 00:05:36.457 LIB libspdk_json.a 00:05:36.457 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:36.458 SO libspdk_json.so.6.0 00:05:36.458 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:36.458 SYMLINK libspdk_json.so 00:05:36.725 LIB libspdk_vmd.a 00:05:36.725 SO libspdk_vmd.so.6.0 00:05:36.725 LIB libspdk_idxd.a 00:05:36.725 CC lib/jsonrpc/jsonrpc_server.o 00:05:36.725 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:36.725 CC lib/jsonrpc/jsonrpc_client.o 00:05:36.725 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:36.984 SO libspdk_idxd.so.12.0 00:05:36.984 SYMLINK libspdk_vmd.so 00:05:36.984 SYMLINK libspdk_idxd.so 00:05:37.242 LIB libspdk_jsonrpc.a 00:05:37.499 SO libspdk_jsonrpc.so.6.0 00:05:37.499 SYMLINK libspdk_jsonrpc.so 00:05:37.757 CC lib/rpc/rpc.o 00:05:38.014 LIB libspdk_rpc.a 00:05:38.014 SO libspdk_rpc.so.6.0 00:05:38.299 SYMLINK libspdk_rpc.so 00:05:38.299 LIB libspdk_env_dpdk.a 00:05:38.299 SO libspdk_env_dpdk.so.15.0 00:05:38.561 CC lib/keyring/keyring.o 00:05:38.561 CC lib/trace/trace.o 00:05:38.561 CC lib/notify/notify.o 00:05:38.561 CC lib/trace/trace_flags.o 00:05:38.561 CC lib/keyring/keyring_rpc.o 00:05:38.561 CC lib/notify/notify_rpc.o 00:05:38.561 CC lib/trace/trace_rpc.o 00:05:38.561 SYMLINK libspdk_env_dpdk.so 00:05:38.561 LIB libspdk_notify.a 00:05:38.818 SO libspdk_notify.so.6.0 00:05:38.818 LIB libspdk_keyring.a 00:05:38.818 SYMLINK libspdk_notify.so 00:05:38.818 LIB libspdk_trace.a 00:05:38.818 SO libspdk_keyring.so.1.0 00:05:38.818 SO libspdk_trace.so.10.0 00:05:38.818 SYMLINK libspdk_keyring.so 00:05:38.818 SYMLINK libspdk_trace.so 00:05:39.383 CC lib/sock/sock.o 00:05:39.383 CC lib/sock/sock_rpc.o 00:05:39.383 CC lib/thread/thread.o 00:05:39.383 CC lib/thread/iobuf.o 00:05:39.948 LIB libspdk_sock.a 00:05:39.948 SO libspdk_sock.so.10.0 00:05:39.948 SYMLINK libspdk_sock.so 00:05:40.205 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:40.205 CC lib/nvme/nvme_ctrlr.o 00:05:40.205 CC lib/nvme/nvme_fabric.o 00:05:40.205 CC lib/nvme/nvme_ns_cmd.o 00:05:40.205 CC lib/nvme/nvme_pcie_common.o 00:05:40.205 CC lib/nvme/nvme_ns.o 00:05:40.205 CC lib/nvme/nvme_pcie.o 00:05:40.205 CC lib/nvme/nvme.o 00:05:40.205 CC lib/nvme/nvme_qpair.o 00:05:41.138 CC lib/nvme/nvme_quirks.o 00:05:41.138 CC lib/nvme/nvme_transport.o 00:05:41.138 CC lib/nvme/nvme_discovery.o 00:05:41.138 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:41.396 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:41.396 CC lib/nvme/nvme_tcp.o 00:05:41.654 LIB libspdk_thread.a 00:05:41.654 SO libspdk_thread.so.10.1 00:05:41.912 SYMLINK libspdk_thread.so 00:05:41.912 CC lib/nvme/nvme_opal.o 00:05:41.912 CC lib/nvme/nvme_io_msg.o 00:05:41.912 CC lib/nvme/nvme_poll_group.o 00:05:42.170 CC lib/accel/accel.o 00:05:42.170 CC lib/blob/blobstore.o 00:05:42.170 CC 
lib/nvme/nvme_zns.o 00:05:42.170 CC lib/nvme/nvme_stubs.o 00:05:42.170 CC lib/init/json_config.o 00:05:42.428 CC lib/init/subsystem.o 00:05:42.686 CC lib/nvme/nvme_auth.o 00:05:42.686 CC lib/nvme/nvme_cuse.o 00:05:42.686 CC lib/init/subsystem_rpc.o 00:05:42.686 CC lib/init/rpc.o 00:05:42.944 CC lib/blob/request.o 00:05:42.944 CC lib/blob/zeroes.o 00:05:42.944 LIB libspdk_init.a 00:05:42.944 SO libspdk_init.so.5.0 00:05:42.944 CC lib/virtio/virtio.o 00:05:43.203 SYMLINK libspdk_init.so 00:05:43.203 CC lib/blob/blob_bs_dev.o 00:05:43.203 CC lib/virtio/virtio_vhost_user.o 00:05:43.203 CC lib/event/app.o 00:05:43.464 CC lib/event/reactor.o 00:05:43.464 CC lib/event/log_rpc.o 00:05:43.464 CC lib/nvme/nvme_rdma.o 00:05:43.464 CC lib/accel/accel_rpc.o 00:05:43.721 CC lib/virtio/virtio_vfio_user.o 00:05:43.721 CC lib/event/app_rpc.o 00:05:43.721 CC lib/virtio/virtio_pci.o 00:05:43.721 CC lib/event/scheduler_static.o 00:05:43.721 CC lib/accel/accel_sw.o 00:05:43.979 LIB libspdk_event.a 00:05:43.979 LIB libspdk_virtio.a 00:05:44.237 SO libspdk_event.so.14.0 00:05:44.237 SO libspdk_virtio.so.7.0 00:05:44.237 LIB libspdk_accel.a 00:05:44.237 SYMLINK libspdk_event.so 00:05:44.237 SYMLINK libspdk_virtio.so 00:05:44.237 SO libspdk_accel.so.16.0 00:05:44.494 SYMLINK libspdk_accel.so 00:05:44.752 CC lib/bdev/bdev.o 00:05:44.752 CC lib/bdev/bdev_rpc.o 00:05:44.752 CC lib/bdev/bdev_zone.o 00:05:44.752 CC lib/bdev/part.o 00:05:44.752 CC lib/bdev/scsi_nvme.o 00:05:45.318 LIB libspdk_nvme.a 00:05:45.576 SO libspdk_nvme.so.13.1 00:05:46.142 SYMLINK libspdk_nvme.so 00:05:47.074 LIB libspdk_blob.a 00:05:47.332 SO libspdk_blob.so.11.0 00:05:47.589 SYMLINK libspdk_blob.so 00:05:47.589 CC lib/blobfs/blobfs.o 00:05:47.847 CC lib/blobfs/tree.o 00:05:47.847 CC lib/lvol/lvol.o 00:05:48.781 LIB libspdk_bdev.a 00:05:48.781 SO libspdk_bdev.so.16.0 00:05:48.781 SYMLINK libspdk_bdev.so 00:05:48.781 LIB libspdk_blobfs.a 00:05:49.075 SO libspdk_blobfs.so.10.0 00:05:49.075 SYMLINK libspdk_blobfs.so 00:05:49.075 CC lib/nvmf/ctrlr.o 00:05:49.075 CC lib/nvmf/ctrlr_discovery.o 00:05:49.075 CC lib/nvmf/ctrlr_bdev.o 00:05:49.075 CC lib/nvmf/nvmf.o 00:05:49.075 CC lib/nvmf/subsystem.o 00:05:49.075 CC lib/nbd/nbd.o 00:05:49.075 CC lib/ublk/ublk.o 00:05:49.075 CC lib/ftl/ftl_core.o 00:05:49.075 CC lib/scsi/dev.o 00:05:49.075 LIB libspdk_lvol.a 00:05:49.075 SO libspdk_lvol.so.10.0 00:05:49.333 CC lib/scsi/lun.o 00:05:49.333 SYMLINK libspdk_lvol.so 00:05:49.333 CC lib/scsi/port.o 00:05:49.592 CC lib/nbd/nbd_rpc.o 00:05:49.592 CC lib/ublk/ublk_rpc.o 00:05:49.592 CC lib/ftl/ftl_init.o 00:05:49.592 CC lib/scsi/scsi.o 00:05:49.850 CC lib/nvmf/nvmf_rpc.o 00:05:49.850 CC lib/ftl/ftl_layout.o 00:05:49.850 LIB libspdk_ublk.a 00:05:49.850 LIB libspdk_nbd.a 00:05:49.850 CC lib/scsi/scsi_bdev.o 00:05:49.850 SO libspdk_ublk.so.3.0 00:05:49.850 SO libspdk_nbd.so.7.0 00:05:49.850 CC lib/nvmf/transport.o 00:05:50.109 SYMLINK libspdk_nbd.so 00:05:50.109 CC lib/scsi/scsi_pr.o 00:05:50.109 SYMLINK libspdk_ublk.so 00:05:50.109 CC lib/scsi/scsi_rpc.o 00:05:50.109 CC lib/nvmf/tcp.o 00:05:50.109 CC lib/scsi/task.o 00:05:50.109 CC lib/nvmf/stubs.o 00:05:50.367 CC lib/ftl/ftl_debug.o 00:05:50.367 CC lib/ftl/ftl_io.o 00:05:50.367 CC lib/ftl/ftl_sb.o 00:05:50.625 CC lib/ftl/ftl_l2p.o 00:05:50.625 LIB libspdk_scsi.a 00:05:50.625 SO libspdk_scsi.so.9.0 00:05:50.625 CC lib/nvmf/mdns_server.o 00:05:50.625 CC lib/nvmf/rdma.o 00:05:50.625 CC lib/nvmf/auth.o 00:05:50.625 SYMLINK libspdk_scsi.so 00:05:50.884 CC lib/ftl/ftl_l2p_flat.o 00:05:50.884 CC lib/ftl/ftl_nv_cache.o 
00:05:50.884 CC lib/ftl/ftl_band.o 00:05:50.884 CC lib/iscsi/conn.o 00:05:50.884 CC lib/vhost/vhost.o 00:05:51.142 CC lib/iscsi/init_grp.o 00:05:51.142 CC lib/ftl/ftl_band_ops.o 00:05:51.400 CC lib/ftl/ftl_writer.o 00:05:51.400 CC lib/ftl/ftl_rq.o 00:05:51.658 CC lib/ftl/ftl_reloc.o 00:05:51.658 CC lib/ftl/ftl_l2p_cache.o 00:05:51.658 CC lib/ftl/ftl_p2l.o 00:05:51.658 CC lib/ftl/mngt/ftl_mngt.o 00:05:51.658 CC lib/vhost/vhost_rpc.o 00:05:51.658 CC lib/iscsi/iscsi.o 00:05:51.916 CC lib/iscsi/md5.o 00:05:52.173 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:52.173 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:52.173 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:52.173 CC lib/iscsi/param.o 00:05:52.173 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:52.431 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:52.431 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:52.431 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:52.431 CC lib/iscsi/portal_grp.o 00:05:52.431 CC lib/vhost/vhost_scsi.o 00:05:52.689 CC lib/iscsi/tgt_node.o 00:05:52.689 CC lib/iscsi/iscsi_subsystem.o 00:05:52.689 CC lib/iscsi/iscsi_rpc.o 00:05:52.689 CC lib/vhost/vhost_blk.o 00:05:52.689 CC lib/iscsi/task.o 00:05:52.689 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:52.689 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:52.948 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:52.948 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:53.207 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:53.207 CC lib/vhost/rte_vhost_user.o 00:05:53.207 CC lib/ftl/utils/ftl_conf.o 00:05:53.207 CC lib/ftl/utils/ftl_md.o 00:05:53.207 CC lib/ftl/utils/ftl_mempool.o 00:05:53.465 CC lib/ftl/utils/ftl_bitmap.o 00:05:53.465 CC lib/ftl/utils/ftl_property.o 00:05:53.465 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:53.465 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:53.724 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:53.724 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:53.724 LIB libspdk_nvmf.a 00:05:53.724 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:53.982 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:53.982 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:53.982 LIB libspdk_iscsi.a 00:05:53.982 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:53.982 SO libspdk_nvmf.so.19.0 00:05:53.982 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:53.982 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:53.982 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:53.982 SO libspdk_iscsi.so.8.0 00:05:54.241 CC lib/ftl/base/ftl_base_dev.o 00:05:54.241 CC lib/ftl/base/ftl_base_bdev.o 00:05:54.241 CC lib/ftl/ftl_trace.o 00:05:54.241 SYMLINK libspdk_nvmf.so 00:05:54.500 SYMLINK libspdk_iscsi.so 00:05:54.500 LIB libspdk_ftl.a 00:05:54.759 SO libspdk_ftl.so.9.0 00:05:54.759 LIB libspdk_vhost.a 00:05:55.017 SO libspdk_vhost.so.8.0 00:05:55.276 SYMLINK libspdk_vhost.so 00:05:55.276 SYMLINK libspdk_ftl.so 00:05:55.843 CC module/env_dpdk/env_dpdk_rpc.o 00:05:55.843 CC module/accel/dsa/accel_dsa.o 00:05:55.843 CC module/accel/ioat/accel_ioat.o 00:05:55.843 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:55.843 CC module/accel/error/accel_error.o 00:05:55.843 CC module/accel/iaa/accel_iaa.o 00:05:55.843 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:55.843 CC module/sock/posix/posix.o 00:05:55.843 CC module/keyring/file/keyring.o 00:05:55.843 CC module/blob/bdev/blob_bdev.o 00:05:55.843 LIB libspdk_env_dpdk_rpc.a 00:05:55.843 SO libspdk_env_dpdk_rpc.so.6.0 00:05:56.101 LIB libspdk_scheduler_dpdk_governor.a 00:05:56.101 SYMLINK libspdk_env_dpdk_rpc.so 00:05:56.101 CC module/accel/iaa/accel_iaa_rpc.o 00:05:56.101 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:56.101 CC module/accel/ioat/accel_ioat_rpc.o 00:05:56.101 CC 
module/keyring/file/keyring_rpc.o 00:05:56.101 CC module/accel/dsa/accel_dsa_rpc.o 00:05:56.101 LIB libspdk_scheduler_dynamic.a 00:05:56.101 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:56.101 CC module/accel/error/accel_error_rpc.o 00:05:56.101 SO libspdk_scheduler_dynamic.so.4.0 00:05:56.359 LIB libspdk_accel_iaa.a 00:05:56.359 SYMLINK libspdk_scheduler_dynamic.so 00:05:56.359 LIB libspdk_keyring_file.a 00:05:56.359 LIB libspdk_accel_dsa.a 00:05:56.359 SO libspdk_accel_iaa.so.3.0 00:05:56.359 LIB libspdk_accel_ioat.a 00:05:56.359 SO libspdk_accel_dsa.so.5.0 00:05:56.359 SO libspdk_keyring_file.so.1.0 00:05:56.359 SO libspdk_accel_ioat.so.6.0 00:05:56.359 LIB libspdk_accel_error.a 00:05:56.359 CC module/keyring/linux/keyring.o 00:05:56.359 SYMLINK libspdk_accel_iaa.so 00:05:56.359 CC module/keyring/linux/keyring_rpc.o 00:05:56.359 SYMLINK libspdk_keyring_file.so 00:05:56.359 SYMLINK libspdk_accel_dsa.so 00:05:56.359 LIB libspdk_blob_bdev.a 00:05:56.359 SO libspdk_accel_error.so.2.0 00:05:56.359 SYMLINK libspdk_accel_ioat.so 00:05:56.359 CC module/scheduler/gscheduler/gscheduler.o 00:05:56.359 SO libspdk_blob_bdev.so.11.0 00:05:56.616 SYMLINK libspdk_accel_error.so 00:05:56.616 SYMLINK libspdk_blob_bdev.so 00:05:56.616 LIB libspdk_keyring_linux.a 00:05:56.616 SO libspdk_keyring_linux.so.1.0 00:05:56.616 LIB libspdk_scheduler_gscheduler.a 00:05:56.616 SO libspdk_scheduler_gscheduler.so.4.0 00:05:56.616 SYMLINK libspdk_keyring_linux.so 00:05:56.874 SYMLINK libspdk_scheduler_gscheduler.so 00:05:56.874 CC module/blobfs/bdev/blobfs_bdev.o 00:05:56.874 CC module/bdev/error/vbdev_error.o 00:05:56.874 CC module/bdev/malloc/bdev_malloc.o 00:05:56.874 CC module/bdev/delay/vbdev_delay.o 00:05:56.874 CC module/bdev/gpt/gpt.o 00:05:56.874 CC module/bdev/lvol/vbdev_lvol.o 00:05:56.874 CC module/bdev/null/bdev_null.o 00:05:56.874 CC module/bdev/nvme/bdev_nvme.o 00:05:57.132 LIB libspdk_sock_posix.a 00:05:57.132 CC module/bdev/passthru/vbdev_passthru.o 00:05:57.132 SO libspdk_sock_posix.so.6.0 00:05:57.132 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:57.132 CC module/bdev/gpt/vbdev_gpt.o 00:05:57.132 SYMLINK libspdk_sock_posix.so 00:05:57.132 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:57.132 CC module/bdev/error/vbdev_error_rpc.o 00:05:57.389 CC module/bdev/null/bdev_null_rpc.o 00:05:57.389 LIB libspdk_blobfs_bdev.a 00:05:57.389 SO libspdk_blobfs_bdev.so.6.0 00:05:57.389 LIB libspdk_bdev_error.a 00:05:57.389 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:57.389 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:57.389 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:57.389 SO libspdk_bdev_error.so.6.0 00:05:57.389 LIB libspdk_bdev_gpt.a 00:05:57.646 SYMLINK libspdk_blobfs_bdev.so 00:05:57.646 CC module/bdev/nvme/nvme_rpc.o 00:05:57.646 SO libspdk_bdev_gpt.so.6.0 00:05:57.646 LIB libspdk_bdev_null.a 00:05:57.646 SYMLINK libspdk_bdev_error.so 00:05:57.646 SO libspdk_bdev_null.so.6.0 00:05:57.646 LIB libspdk_bdev_passthru.a 00:05:57.646 LIB libspdk_bdev_malloc.a 00:05:57.646 SYMLINK libspdk_bdev_gpt.so 00:05:57.646 LIB libspdk_bdev_delay.a 00:05:57.646 SO libspdk_bdev_passthru.so.6.0 00:05:57.646 SO libspdk_bdev_malloc.so.6.0 00:05:57.646 SYMLINK libspdk_bdev_null.so 00:05:57.646 SO libspdk_bdev_delay.so.6.0 00:05:57.646 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:57.904 SYMLINK libspdk_bdev_passthru.so 00:05:57.904 SYMLINK libspdk_bdev_delay.so 00:05:57.904 SYMLINK libspdk_bdev_malloc.so 00:05:57.904 CC module/bdev/raid/bdev_raid.o 00:05:57.904 CC module/bdev/nvme/bdev_mdns_client.o 00:05:57.904 CC 
module/bdev/nvme/vbdev_opal.o 00:05:57.904 CC module/bdev/split/vbdev_split.o 00:05:58.162 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:58.162 CC module/bdev/xnvme/bdev_xnvme.o 00:05:58.162 CC module/bdev/aio/bdev_aio.o 00:05:58.162 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:58.162 LIB libspdk_bdev_lvol.a 00:05:58.162 CC module/bdev/split/vbdev_split_rpc.o 00:05:58.162 SO libspdk_bdev_lvol.so.6.0 00:05:58.420 CC module/bdev/raid/bdev_raid_rpc.o 00:05:58.420 SYMLINK libspdk_bdev_lvol.so 00:05:58.420 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:58.420 LIB libspdk_bdev_split.a 00:05:58.679 SO libspdk_bdev_split.so.6.0 00:05:58.679 CC module/bdev/ftl/bdev_ftl.o 00:05:58.679 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:58.679 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:58.679 SYMLINK libspdk_bdev_split.so 00:05:58.679 CC module/bdev/raid/bdev_raid_sb.o 00:05:58.679 CC module/bdev/aio/bdev_aio_rpc.o 00:05:58.679 LIB libspdk_bdev_xnvme.a 00:05:58.679 CC module/bdev/iscsi/bdev_iscsi.o 00:05:58.937 SO libspdk_bdev_xnvme.so.3.0 00:05:58.937 LIB libspdk_bdev_zone_block.a 00:05:58.937 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:58.937 SO libspdk_bdev_zone_block.so.6.0 00:05:58.937 LIB libspdk_bdev_aio.a 00:05:58.937 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:58.937 SYMLINK libspdk_bdev_xnvme.so 00:05:58.937 CC module/bdev/raid/raid0.o 00:05:58.937 SO libspdk_bdev_aio.so.6.0 00:05:58.937 SYMLINK libspdk_bdev_zone_block.so 00:05:58.937 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:59.195 SYMLINK libspdk_bdev_aio.so 00:05:59.195 CC module/bdev/raid/raid1.o 00:05:59.195 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:59.195 LIB libspdk_bdev_ftl.a 00:05:59.195 SO libspdk_bdev_ftl.so.6.0 00:05:59.195 CC module/bdev/raid/concat.o 00:05:59.195 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:59.453 SYMLINK libspdk_bdev_ftl.so 00:05:59.453 LIB libspdk_bdev_iscsi.a 00:05:59.453 SO libspdk_bdev_iscsi.so.6.0 00:05:59.712 SYMLINK libspdk_bdev_iscsi.so 00:05:59.712 LIB libspdk_bdev_raid.a 00:05:59.712 LIB libspdk_bdev_virtio.a 00:05:59.712 SO libspdk_bdev_raid.so.6.0 00:05:59.712 SO libspdk_bdev_virtio.so.6.0 00:05:59.970 SYMLINK libspdk_bdev_raid.so 00:05:59.970 SYMLINK libspdk_bdev_virtio.so 00:06:00.903 LIB libspdk_bdev_nvme.a 00:06:00.903 SO libspdk_bdev_nvme.so.7.0 00:06:01.161 SYMLINK libspdk_bdev_nvme.so 00:06:01.726 CC module/event/subsystems/keyring/keyring.o 00:06:01.726 CC module/event/subsystems/sock/sock.o 00:06:01.726 CC module/event/subsystems/vmd/vmd.o 00:06:01.726 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:01.726 CC module/event/subsystems/iobuf/iobuf.o 00:06:01.726 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:01.726 CC module/event/subsystems/scheduler/scheduler.o 00:06:01.726 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:01.726 LIB libspdk_event_sock.a 00:06:01.726 LIB libspdk_event_vmd.a 00:06:01.726 LIB libspdk_event_vhost_blk.a 00:06:01.726 LIB libspdk_event_keyring.a 00:06:01.726 SO libspdk_event_sock.so.5.0 00:06:01.726 LIB libspdk_event_iobuf.a 00:06:01.726 SO libspdk_event_vhost_blk.so.3.0 00:06:01.726 SO libspdk_event_vmd.so.6.0 00:06:01.984 LIB libspdk_event_scheduler.a 00:06:01.984 SO libspdk_event_keyring.so.1.0 00:06:01.984 SO libspdk_event_iobuf.so.3.0 00:06:01.984 SYMLINK libspdk_event_sock.so 00:06:01.984 SYMLINK libspdk_event_vhost_blk.so 00:06:01.984 SO libspdk_event_scheduler.so.4.0 00:06:01.984 SYMLINK libspdk_event_vmd.so 00:06:01.984 SYMLINK libspdk_event_keyring.so 00:06:01.984 SYMLINK libspdk_event_iobuf.so 00:06:01.984 SYMLINK 
libspdk_event_scheduler.so 00:06:02.242 CC module/event/subsystems/accel/accel.o 00:06:02.500 LIB libspdk_event_accel.a 00:06:02.500 SO libspdk_event_accel.so.6.0 00:06:02.500 SYMLINK libspdk_event_accel.so 00:06:02.758 CC module/event/subsystems/bdev/bdev.o 00:06:03.016 LIB libspdk_event_bdev.a 00:06:03.016 SO libspdk_event_bdev.so.6.0 00:06:03.274 SYMLINK libspdk_event_bdev.so 00:06:03.532 CC module/event/subsystems/ublk/ublk.o 00:06:03.532 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:03.532 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:03.532 CC module/event/subsystems/nbd/nbd.o 00:06:03.532 CC module/event/subsystems/scsi/scsi.o 00:06:03.532 LIB libspdk_event_ublk.a 00:06:03.532 LIB libspdk_event_nbd.a 00:06:03.532 SO libspdk_event_ublk.so.3.0 00:06:03.532 LIB libspdk_event_scsi.a 00:06:03.532 SO libspdk_event_nbd.so.6.0 00:06:03.790 SO libspdk_event_scsi.so.6.0 00:06:03.790 SYMLINK libspdk_event_ublk.so 00:06:03.790 SYMLINK libspdk_event_nbd.so 00:06:03.790 LIB libspdk_event_nvmf.a 00:06:03.790 SYMLINK libspdk_event_scsi.so 00:06:03.790 SO libspdk_event_nvmf.so.6.0 00:06:03.790 SYMLINK libspdk_event_nvmf.so 00:06:04.048 CC module/event/subsystems/iscsi/iscsi.o 00:06:04.048 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:04.048 LIB libspdk_event_iscsi.a 00:06:04.306 LIB libspdk_event_vhost_scsi.a 00:06:04.306 SO libspdk_event_iscsi.so.6.0 00:06:04.306 SO libspdk_event_vhost_scsi.so.3.0 00:06:04.306 SYMLINK libspdk_event_iscsi.so 00:06:04.306 SYMLINK libspdk_event_vhost_scsi.so 00:06:04.306 SO libspdk.so.6.0 00:06:04.564 SYMLINK libspdk.so 00:06:04.565 CC app/trace_record/trace_record.o 00:06:04.822 TEST_HEADER include/spdk/accel.h 00:06:04.822 TEST_HEADER include/spdk/accel_module.h 00:06:04.822 TEST_HEADER include/spdk/assert.h 00:06:04.822 TEST_HEADER include/spdk/barrier.h 00:06:04.822 TEST_HEADER include/spdk/base64.h 00:06:04.822 CXX app/trace/trace.o 00:06:04.822 TEST_HEADER include/spdk/bdev.h 00:06:04.822 TEST_HEADER include/spdk/bdev_module.h 00:06:04.822 TEST_HEADER include/spdk/bdev_zone.h 00:06:04.822 TEST_HEADER include/spdk/bit_array.h 00:06:04.822 TEST_HEADER include/spdk/bit_pool.h 00:06:04.822 TEST_HEADER include/spdk/blob_bdev.h 00:06:04.822 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:04.822 TEST_HEADER include/spdk/blobfs.h 00:06:04.822 TEST_HEADER include/spdk/blob.h 00:06:04.822 TEST_HEADER include/spdk/conf.h 00:06:04.822 TEST_HEADER include/spdk/config.h 00:06:04.822 TEST_HEADER include/spdk/cpuset.h 00:06:04.822 TEST_HEADER include/spdk/crc16.h 00:06:04.822 CC app/iscsi_tgt/iscsi_tgt.o 00:06:04.822 TEST_HEADER include/spdk/crc32.h 00:06:04.822 CC app/nvmf_tgt/nvmf_main.o 00:06:04.822 TEST_HEADER include/spdk/crc64.h 00:06:04.822 CC app/spdk_tgt/spdk_tgt.o 00:06:04.822 TEST_HEADER include/spdk/dif.h 00:06:04.822 TEST_HEADER include/spdk/dma.h 00:06:04.822 TEST_HEADER include/spdk/endian.h 00:06:04.822 TEST_HEADER include/spdk/env_dpdk.h 00:06:04.822 TEST_HEADER include/spdk/env.h 00:06:04.822 TEST_HEADER include/spdk/event.h 00:06:04.822 TEST_HEADER include/spdk/fd_group.h 00:06:04.822 TEST_HEADER include/spdk/fd.h 00:06:04.822 TEST_HEADER include/spdk/file.h 00:06:04.822 TEST_HEADER include/spdk/ftl.h 00:06:04.822 TEST_HEADER include/spdk/gpt_spec.h 00:06:04.822 TEST_HEADER include/spdk/hexlify.h 00:06:04.822 TEST_HEADER include/spdk/histogram_data.h 00:06:04.822 TEST_HEADER include/spdk/idxd.h 00:06:04.822 TEST_HEADER include/spdk/idxd_spec.h 00:06:04.822 TEST_HEADER include/spdk/init.h 00:06:04.822 TEST_HEADER include/spdk/ioat.h 00:06:04.822 
TEST_HEADER include/spdk/ioat_spec.h 00:06:04.822 CC examples/util/zipf/zipf.o 00:06:04.822 TEST_HEADER include/spdk/iscsi_spec.h 00:06:04.822 TEST_HEADER include/spdk/json.h 00:06:04.822 CC test/thread/poller_perf/poller_perf.o 00:06:04.822 TEST_HEADER include/spdk/jsonrpc.h 00:06:04.822 TEST_HEADER include/spdk/keyring.h 00:06:04.822 TEST_HEADER include/spdk/keyring_module.h 00:06:04.822 TEST_HEADER include/spdk/likely.h 00:06:04.822 TEST_HEADER include/spdk/log.h 00:06:04.822 TEST_HEADER include/spdk/lvol.h 00:06:04.822 TEST_HEADER include/spdk/memory.h 00:06:04.822 TEST_HEADER include/spdk/mmio.h 00:06:04.822 TEST_HEADER include/spdk/nbd.h 00:06:04.823 TEST_HEADER include/spdk/net.h 00:06:04.823 TEST_HEADER include/spdk/notify.h 00:06:04.823 TEST_HEADER include/spdk/nvme.h 00:06:04.823 TEST_HEADER include/spdk/nvme_intel.h 00:06:04.823 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:04.823 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:04.823 TEST_HEADER include/spdk/nvme_spec.h 00:06:04.823 TEST_HEADER include/spdk/nvme_zns.h 00:06:04.823 CC test/dma/test_dma/test_dma.o 00:06:04.823 CC test/app/bdev_svc/bdev_svc.o 00:06:04.823 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:04.823 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:04.823 TEST_HEADER include/spdk/nvmf.h 00:06:04.823 TEST_HEADER include/spdk/nvmf_spec.h 00:06:04.823 TEST_HEADER include/spdk/nvmf_transport.h 00:06:04.823 TEST_HEADER include/spdk/opal.h 00:06:04.823 TEST_HEADER include/spdk/opal_spec.h 00:06:04.823 TEST_HEADER include/spdk/pci_ids.h 00:06:04.823 TEST_HEADER include/spdk/pipe.h 00:06:04.823 TEST_HEADER include/spdk/queue.h 00:06:04.823 TEST_HEADER include/spdk/reduce.h 00:06:04.823 TEST_HEADER include/spdk/rpc.h 00:06:04.823 TEST_HEADER include/spdk/scheduler.h 00:06:04.823 TEST_HEADER include/spdk/scsi.h 00:06:04.823 TEST_HEADER include/spdk/scsi_spec.h 00:06:04.823 TEST_HEADER include/spdk/sock.h 00:06:04.823 TEST_HEADER include/spdk/stdinc.h 00:06:04.823 TEST_HEADER include/spdk/string.h 00:06:04.823 TEST_HEADER include/spdk/thread.h 00:06:04.823 TEST_HEADER include/spdk/trace.h 00:06:04.823 TEST_HEADER include/spdk/trace_parser.h 00:06:04.823 TEST_HEADER include/spdk/tree.h 00:06:05.081 TEST_HEADER include/spdk/ublk.h 00:06:05.081 TEST_HEADER include/spdk/util.h 00:06:05.081 TEST_HEADER include/spdk/uuid.h 00:06:05.081 TEST_HEADER include/spdk/version.h 00:06:05.081 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:05.081 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:05.081 TEST_HEADER include/spdk/vhost.h 00:06:05.081 TEST_HEADER include/spdk/vmd.h 00:06:05.081 TEST_HEADER include/spdk/xor.h 00:06:05.081 TEST_HEADER include/spdk/zipf.h 00:06:05.081 CXX test/cpp_headers/accel.o 00:06:05.081 LINK nvmf_tgt 00:06:05.081 LINK iscsi_tgt 00:06:05.081 LINK spdk_tgt 00:06:05.081 LINK zipf 00:06:05.081 LINK poller_perf 00:06:05.081 LINK bdev_svc 00:06:05.081 LINK spdk_trace_record 00:06:05.081 CXX test/cpp_headers/accel_module.o 00:06:05.338 LINK spdk_trace 00:06:05.338 CXX test/cpp_headers/assert.o 00:06:05.338 CXX test/cpp_headers/barrier.o 00:06:05.338 CXX test/cpp_headers/base64.o 00:06:05.338 CXX test/cpp_headers/bdev.o 00:06:05.338 LINK test_dma 00:06:05.338 CC app/spdk_lspci/spdk_lspci.o 00:06:05.338 CC examples/ioat/perf/perf.o 00:06:05.596 CC test/app/histogram_perf/histogram_perf.o 00:06:05.596 CXX test/cpp_headers/bdev_module.o 00:06:05.596 CC test/app/jsoncat/jsoncat.o 00:06:05.596 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:05.596 CC test/app/stub/stub.o 00:06:05.596 LINK spdk_lspci 00:06:05.596 CC 
app/spdk_nvme_perf/perf.o 00:06:05.596 CC examples/vmd/lsvmd/lsvmd.o 00:06:05.596 LINK ioat_perf 00:06:05.596 LINK histogram_perf 00:06:05.854 LINK jsoncat 00:06:05.854 LINK stub 00:06:05.854 CXX test/cpp_headers/bdev_zone.o 00:06:05.854 LINK lsvmd 00:06:05.854 CC examples/ioat/verify/verify.o 00:06:06.112 CC test/event/event_perf/event_perf.o 00:06:06.112 CXX test/cpp_headers/bit_array.o 00:06:06.112 CC examples/vmd/led/led.o 00:06:06.112 CC test/env/mem_callbacks/mem_callbacks.o 00:06:06.112 CC app/spdk_nvme_identify/identify.o 00:06:06.112 LINK event_perf 00:06:06.112 CC app/spdk_nvme_discover/discovery_aer.o 00:06:06.112 LINK nvme_fuzz 00:06:06.112 CXX test/cpp_headers/bit_pool.o 00:06:06.112 CC app/spdk_top/spdk_top.o 00:06:06.112 LINK verify 00:06:06.112 LINK led 00:06:06.369 CXX test/cpp_headers/blob_bdev.o 00:06:06.369 LINK spdk_nvme_discover 00:06:06.369 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:06.369 CC test/event/reactor/reactor.o 00:06:06.627 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:06.627 CXX test/cpp_headers/blobfs_bdev.o 00:06:06.627 CC examples/idxd/perf/perf.o 00:06:06.627 LINK reactor 00:06:06.627 LINK spdk_nvme_perf 00:06:06.627 LINK mem_callbacks 00:06:06.886 CC app/vhost/vhost.o 00:06:06.886 CXX test/cpp_headers/blobfs.o 00:06:06.886 LINK interrupt_tgt 00:06:06.886 CC test/event/reactor_perf/reactor_perf.o 00:06:06.886 LINK vhost 00:06:07.144 CXX test/cpp_headers/blob.o 00:06:07.144 CC test/env/vtophys/vtophys.o 00:06:07.144 CC app/spdk_dd/spdk_dd.o 00:06:07.144 LINK idxd_perf 00:06:07.144 LINK reactor_perf 00:06:07.144 LINK spdk_nvme_identify 00:06:07.144 CXX test/cpp_headers/conf.o 00:06:07.144 LINK vtophys 00:06:07.402 CC test/event/app_repeat/app_repeat.o 00:06:07.402 CC test/event/scheduler/scheduler.o 00:06:07.402 CXX test/cpp_headers/config.o 00:06:07.402 LINK spdk_top 00:06:07.402 CXX test/cpp_headers/cpuset.o 00:06:07.402 LINK app_repeat 00:06:07.661 CC app/fio/nvme/fio_plugin.o 00:06:07.661 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:07.661 CC examples/thread/thread/thread_ex.o 00:06:07.661 LINK spdk_dd 00:06:07.661 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:07.661 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:07.661 CXX test/cpp_headers/crc16.o 00:06:07.661 LINK scheduler 00:06:07.661 LINK env_dpdk_post_init 00:06:07.920 CXX test/cpp_headers/crc32.o 00:06:07.920 CC test/nvme/aer/aer.o 00:06:07.920 LINK thread 00:06:07.920 CC examples/sock/hello_world/hello_sock.o 00:06:07.920 CC test/nvme/reset/reset.o 00:06:07.920 CC test/env/memory/memory_ut.o 00:06:08.178 CXX test/cpp_headers/crc64.o 00:06:08.178 CC test/rpc_client/rpc_client_test.o 00:06:08.178 LINK aer 00:06:08.178 LINK hello_sock 00:06:08.178 LINK vhost_fuzz 00:06:08.178 CXX test/cpp_headers/dif.o 00:06:08.178 LINK reset 00:06:08.434 CC test/nvme/sgl/sgl.o 00:06:08.434 LINK rpc_client_test 00:06:08.434 LINK spdk_nvme 00:06:08.434 CXX test/cpp_headers/dma.o 00:06:08.692 CC test/nvme/e2edp/nvme_dp.o 00:06:08.692 CC test/nvme/overhead/overhead.o 00:06:08.692 CC app/fio/bdev/fio_plugin.o 00:06:08.692 CC test/nvme/err_injection/err_injection.o 00:06:08.692 LINK sgl 00:06:08.692 CXX test/cpp_headers/endian.o 00:06:08.692 LINK iscsi_fuzz 00:06:08.692 CC examples/accel/perf/accel_perf.o 00:06:08.692 CC examples/blob/hello_world/hello_blob.o 00:06:08.951 LINK err_injection 00:06:08.951 LINK nvme_dp 00:06:08.951 CXX test/cpp_headers/env_dpdk.o 00:06:08.951 LINK overhead 00:06:08.951 LINK hello_blob 00:06:08.951 CC test/nvme/startup/startup.o 00:06:09.209 CXX 
test/cpp_headers/env.o 00:06:09.209 CC test/nvme/reserve/reserve.o 00:06:09.209 CC test/nvme/connect_stress/connect_stress.o 00:06:09.209 CC test/nvme/simple_copy/simple_copy.o 00:06:09.209 LINK startup 00:06:09.209 CC test/nvme/boot_partition/boot_partition.o 00:06:09.468 LINK accel_perf 00:06:09.468 LINK spdk_bdev 00:06:09.468 CXX test/cpp_headers/event.o 00:06:09.468 LINK reserve 00:06:09.468 CC examples/blob/cli/blobcli.o 00:06:09.468 LINK connect_stress 00:06:09.468 LINK memory_ut 00:06:09.468 LINK boot_partition 00:06:09.468 LINK simple_copy 00:06:09.468 CC test/nvme/compliance/nvme_compliance.o 00:06:09.468 CXX test/cpp_headers/fd_group.o 00:06:09.727 CC test/nvme/fused_ordering/fused_ordering.o 00:06:09.727 CC test/env/pci/pci_ut.o 00:06:09.727 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:09.727 CXX test/cpp_headers/fd.o 00:06:09.727 CC test/nvme/fdp/fdp.o 00:06:09.727 CC test/nvme/cuse/cuse.o 00:06:09.727 LINK fused_ordering 00:06:09.985 CC examples/nvme/reconnect/reconnect.o 00:06:09.985 CC examples/nvme/hello_world/hello_world.o 00:06:09.985 CXX test/cpp_headers/file.o 00:06:09.985 LINK blobcli 00:06:09.985 LINK doorbell_aers 00:06:09.985 LINK nvme_compliance 00:06:09.985 CXX test/cpp_headers/ftl.o 00:06:10.243 CXX test/cpp_headers/gpt_spec.o 00:06:10.243 LINK hello_world 00:06:10.243 LINK fdp 00:06:10.243 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:10.243 LINK pci_ut 00:06:10.243 LINK reconnect 00:06:10.501 CXX test/cpp_headers/hexlify.o 00:06:10.501 CC test/accel/dif/dif.o 00:06:10.501 CC test/blobfs/mkfs/mkfs.o 00:06:10.501 CC examples/nvme/arbitration/arbitration.o 00:06:10.501 CC examples/bdev/hello_world/hello_bdev.o 00:06:10.501 CC examples/nvme/hotplug/hotplug.o 00:06:10.759 CXX test/cpp_headers/histogram_data.o 00:06:10.759 CXX test/cpp_headers/idxd.o 00:06:10.759 LINK mkfs 00:06:10.759 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:10.759 LINK hello_bdev 00:06:10.759 LINK hotplug 00:06:11.018 LINK arbitration 00:06:11.018 CXX test/cpp_headers/idxd_spec.o 00:06:11.018 LINK nvme_manage 00:06:11.018 CC examples/nvme/abort/abort.o 00:06:11.018 LINK cmb_copy 00:06:11.018 LINK dif 00:06:11.018 CXX test/cpp_headers/init.o 00:06:11.018 CXX test/cpp_headers/ioat.o 00:06:11.018 CXX test/cpp_headers/ioat_spec.o 00:06:11.328 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:11.328 CC examples/bdev/bdevperf/bdevperf.o 00:06:11.328 CXX test/cpp_headers/iscsi_spec.o 00:06:11.328 CC test/lvol/esnap/esnap.o 00:06:11.328 CXX test/cpp_headers/json.o 00:06:11.328 CXX test/cpp_headers/jsonrpc.o 00:06:11.328 CXX test/cpp_headers/keyring.o 00:06:11.328 CXX test/cpp_headers/keyring_module.o 00:06:11.328 LINK pmr_persistence 00:06:11.328 CXX test/cpp_headers/likely.o 00:06:11.328 LINK abort 00:06:11.586 CXX test/cpp_headers/log.o 00:06:11.586 CXX test/cpp_headers/lvol.o 00:06:11.586 CXX test/cpp_headers/memory.o 00:06:11.586 CXX test/cpp_headers/mmio.o 00:06:11.586 CXX test/cpp_headers/nbd.o 00:06:11.586 CXX test/cpp_headers/net.o 00:06:11.586 CXX test/cpp_headers/notify.o 00:06:11.586 CXX test/cpp_headers/nvme.o 00:06:11.586 LINK cuse 00:06:11.586 CXX test/cpp_headers/nvme_intel.o 00:06:11.586 CXX test/cpp_headers/nvme_ocssd.o 00:06:11.586 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:11.845 CXX test/cpp_headers/nvme_spec.o 00:06:11.845 CXX test/cpp_headers/nvme_zns.o 00:06:11.845 CXX test/cpp_headers/nvmf_cmd.o 00:06:11.845 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:11.845 CC test/bdev/bdevio/bdevio.o 00:06:11.845 CXX test/cpp_headers/nvmf.o 00:06:11.845 CXX 
test/cpp_headers/nvmf_spec.o 00:06:11.845 CXX test/cpp_headers/nvmf_transport.o 00:06:11.845 CXX test/cpp_headers/opal.o 00:06:12.103 CXX test/cpp_headers/opal_spec.o 00:06:12.103 CXX test/cpp_headers/pci_ids.o 00:06:12.103 CXX test/cpp_headers/pipe.o 00:06:12.103 CXX test/cpp_headers/queue.o 00:06:12.103 CXX test/cpp_headers/reduce.o 00:06:12.103 CXX test/cpp_headers/rpc.o 00:06:12.103 LINK bdevperf 00:06:12.103 CXX test/cpp_headers/scheduler.o 00:06:12.103 CXX test/cpp_headers/scsi.o 00:06:12.103 CXX test/cpp_headers/scsi_spec.o 00:06:12.362 CXX test/cpp_headers/sock.o 00:06:12.362 CXX test/cpp_headers/stdinc.o 00:06:12.362 CXX test/cpp_headers/string.o 00:06:12.362 CXX test/cpp_headers/thread.o 00:06:12.362 CXX test/cpp_headers/trace.o 00:06:12.362 LINK bdevio 00:06:12.362 CXX test/cpp_headers/trace_parser.o 00:06:12.362 CXX test/cpp_headers/tree.o 00:06:12.362 CXX test/cpp_headers/ublk.o 00:06:12.362 CXX test/cpp_headers/util.o 00:06:12.362 CXX test/cpp_headers/uuid.o 00:06:12.362 CXX test/cpp_headers/version.o 00:06:12.620 CXX test/cpp_headers/vfio_user_pci.o 00:06:12.620 CXX test/cpp_headers/vfio_user_spec.o 00:06:12.620 CXX test/cpp_headers/vhost.o 00:06:12.620 CXX test/cpp_headers/vmd.o 00:06:12.620 CXX test/cpp_headers/xor.o 00:06:12.620 CC examples/nvmf/nvmf/nvmf.o 00:06:12.620 CXX test/cpp_headers/zipf.o 00:06:12.878 LINK nvmf 00:06:19.436 LINK esnap 00:06:19.436 00:06:19.436 real 1m39.253s 00:06:19.436 user 9m28.454s 00:06:19.436 sys 2m11.299s 00:06:19.436 16:57:11 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:19.436 16:57:11 make -- common/autotest_common.sh@10 -- $ set +x 00:06:19.436 ************************************ 00:06:19.436 END TEST make 00:06:19.436 ************************************ 00:06:19.436 16:57:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:19.436 16:57:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:19.436 16:57:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:19.436 16:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:19.436 16:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:19.436 16:57:11 -- pm/common@44 -- $ pid=5240 00:06:19.436 16:57:11 -- pm/common@50 -- $ kill -TERM 5240 00:06:19.436 16:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:19.436 16:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:19.436 16:57:11 -- pm/common@44 -- $ pid=5242 00:06:19.436 16:57:11 -- pm/common@50 -- $ kill -TERM 5242 00:06:19.436 16:57:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.436 16:57:11 -- nvmf/common.sh@7 -- # uname -s 00:06:19.436 16:57:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.436 16:57:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.436 16:57:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.436 16:57:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.436 16:57:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.436 16:57:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.436 16:57:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.436 16:57:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.436 16:57:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.436 16:57:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.436 16:57:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:06:19.436 16:57:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:06:19.436 16:57:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.436 16:57:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.436 16:57:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.436 16:57:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.436 16:57:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.436 16:57:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.436 16:57:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.436 16:57:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.436 16:57:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 16:57:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 16:57:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 16:57:11 -- paths/export.sh@5 -- # export PATH 00:06:19.436 16:57:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 16:57:11 -- nvmf/common.sh@47 -- # : 0 00:06:19.436 16:57:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.436 16:57:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.436 16:57:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.436 16:57:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.436 16:57:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.436 16:57:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.437 16:57:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.437 16:57:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.437 16:57:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:19.437 16:57:11 -- spdk/autotest.sh@32 -- # uname -s 00:06:19.437 16:57:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:19.437 16:57:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:19.437 16:57:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:19.437 16:57:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:19.437 16:57:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:19.437 16:57:11 -- spdk/autotest.sh@44 -- # modprobe nbd 
00:06:19.437 16:57:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:19.437 16:57:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:19.437 16:57:11 -- spdk/autotest.sh@48 -- # udevadm_pid=53993 00:06:19.437 16:57:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:19.437 16:57:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:19.437 16:57:11 -- pm/common@17 -- # local monitor 00:06:19.437 16:57:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:19.437 16:57:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:19.437 16:57:11 -- pm/common@25 -- # sleep 1 00:06:19.437 16:57:11 -- pm/common@21 -- # date +%s 00:06:19.437 16:57:11 -- pm/common@21 -- # date +%s 00:06:19.437 16:57:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721926631 00:06:19.437 16:57:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721926631 00:06:19.437 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721926631_collect-cpu-load.pm.log 00:06:19.437 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721926631_collect-vmstat.pm.log 00:06:20.370 16:57:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:20.370 16:57:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:20.370 16:57:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.370 16:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.628 16:57:12 -- spdk/autotest.sh@59 -- # create_test_list 00:06:20.628 16:57:12 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:20.628 16:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.628 16:57:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:20.628 16:57:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:20.628 16:57:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:20.628 16:57:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:20.628 16:57:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:20.628 16:57:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:20.628 16:57:12 -- common/autotest_common.sh@1455 -- # uname 00:06:20.628 16:57:12 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:20.628 16:57:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:20.628 16:57:12 -- common/autotest_common.sh@1475 -- # uname 00:06:20.628 16:57:12 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:20.628 16:57:12 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:20.628 16:57:12 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:20.628 16:57:12 -- spdk/autotest.sh@72 -- # hash lcov 00:06:20.628 16:57:12 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:20.628 16:57:12 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:20.628 --rc lcov_branch_coverage=1 00:06:20.628 --rc lcov_function_coverage=1 00:06:20.628 --rc genhtml_branch_coverage=1 00:06:20.628 --rc genhtml_function_coverage=1 00:06:20.628 --rc genhtml_legend=1 00:06:20.628 --rc geninfo_all_blocks=1 00:06:20.628 ' 00:06:20.628 16:57:12 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:20.628 --rc lcov_branch_coverage=1 00:06:20.628 --rc 
lcov_function_coverage=1 00:06:20.628 --rc genhtml_branch_coverage=1 00:06:20.628 --rc genhtml_function_coverage=1 00:06:20.628 --rc genhtml_legend=1 00:06:20.628 --rc geninfo_all_blocks=1 00:06:20.628 ' 00:06:20.628 16:57:12 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:20.628 --rc lcov_branch_coverage=1 00:06:20.628 --rc lcov_function_coverage=1 00:06:20.628 --rc genhtml_branch_coverage=1 00:06:20.628 --rc genhtml_function_coverage=1 00:06:20.628 --rc genhtml_legend=1 00:06:20.628 --rc geninfo_all_blocks=1 00:06:20.628 --no-external' 00:06:20.628 16:57:12 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:20.628 --rc lcov_branch_coverage=1 00:06:20.628 --rc lcov_function_coverage=1 00:06:20.628 --rc genhtml_branch_coverage=1 00:06:20.628 --rc genhtml_function_coverage=1 00:06:20.628 --rc genhtml_legend=1 00:06:20.628 --rc geninfo_all_blocks=1 00:06:20.628 --no-external' 00:06:20.628 16:57:12 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:20.628 lcov: LCOV version 1.14 00:06:20.628 16:57:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:38.778 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:38.778 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:50.982 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:50.982 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:50.982 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:50.982 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:50.982 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:50.982 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:50.982 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:50.982 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:50.982 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:50.983 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:50.983 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:50.983 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:50.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:50.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:50.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:50.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:54.268 16:57:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:54.268 16:57:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:54.268 16:57:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.268 16:57:46 -- spdk/autotest.sh@91 -- # rm -f 00:06:54.268 16:57:46 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:54.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:06:54.835 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:54.835 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:54.835 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:55.095 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:55.095 16:57:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:55.095 16:57:47 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:55.095 16:57:47 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:55.095 16:57:47 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2c2n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme2c2n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:55.095 16:57:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:55.095 16:57:47 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:06:55.095 16:57:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:55.095 16:57:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:55.095 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.095 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.095 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:55.095 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:55.095 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:55.095 No valid GPT data, bailing 00:06:55.095 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:55.095 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.095 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.095 16:57:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:55.095 1+0 records in 00:06:55.095 1+0 records out 00:06:55.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146698 s, 71.5 MB/s 00:06:55.095 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.095 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.095 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:55.095 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:55.095 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:55.095 No valid GPT data, bailing 00:06:55.095 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:55.095 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.095 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.095 16:57:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:55.095 1+0 records in 00:06:55.095 1+0 records out 00:06:55.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534103 s, 196 MB/s 00:06:55.095 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.095 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.095 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:06:55.095 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:06:55.095 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:55.354 No valid GPT data, bailing 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.354 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.354 16:57:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:55.354 1+0 records in 00:06:55.354 1+0 records out 00:06:55.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452677 s, 232 MB/s 00:06:55.354 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.354 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.354 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:06:55.354 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:06:55.354 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:55.354 No valid GPT data, bailing 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.354 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.354 16:57:47 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:55.354 1+0 records in 00:06:55.354 1+0 records out 00:06:55.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596366 s, 176 MB/s 00:06:55.354 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.354 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.354 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:06:55.354 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:06:55.354 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:55.354 No valid GPT data, bailing 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:55.354 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.354 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.354 16:57:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:55.354 1+0 records in 00:06:55.354 1+0 records out 00:06:55.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540523 s, 194 MB/s 00:06:55.354 16:57:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.354 16:57:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:55.354 16:57:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:06:55.354 16:57:47 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:06:55.354 16:57:47 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:55.612 No valid GPT data, bailing 00:06:55.612 16:57:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:55.612 16:57:47 -- scripts/common.sh@391 -- # pt= 00:06:55.612 16:57:47 -- scripts/common.sh@392 -- # return 1 00:06:55.612 16:57:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:55.612 1+0 records in 00:06:55.612 1+0 records out 00:06:55.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480892 s, 218 MB/s 00:06:55.612 16:57:47 -- spdk/autotest.sh@118 -- # sync 00:06:55.612 16:57:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:55.612 16:57:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:55.612 16:57:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:57.515 16:57:49 -- spdk/autotest.sh@124 -- # uname -s 00:06:57.515 16:57:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:57.515 16:57:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:57.515 16:57:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.515 16:57:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.515 16:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.515 ************************************ 00:06:57.515 START TEST setup.sh 00:06:57.515 ************************************ 00:06:57.515 16:57:49 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:57.515 * Looking for test storage... 
00:06:57.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:57.515 16:57:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:57.515 16:57:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:57.515 16:57:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:57.515 16:57:49 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.515 16:57:49 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.515 16:57:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:57.515 ************************************ 00:06:57.515 START TEST acl 00:06:57.515 ************************************ 00:06:57.515 16:57:49 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:57.773 * Looking for test storage... 00:06:57.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:57.773 16:57:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2c2n1 00:06:57.773 16:57:50 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2c2n1 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.773 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:57.774 16:57:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:57.774 16:57:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:57.774 16:57:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:57.774 16:57:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:57.774 16:57:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:57.774 16:57:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:57.774 16:57:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:57.774 16:57:50 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:58.725 16:57:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:58.725 16:57:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:58.725 16:57:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:58.725 16:57:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:58.725 16:57:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:58.725 16:57:51 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:59.306 16:57:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:59.306 16:57:51 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:59.306 16:57:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:59.874 Hugepages 00:06:59.874 node hugesize free / total 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:59.874 00:06:59.874 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:59.874 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:00.132 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:07:00.390 16:57:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:07:00.390 16:57:52 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.390 16:57:52 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.390 16:57:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:00.390 ************************************ 00:07:00.390 START TEST denied 00:07:00.390 ************************************ 00:07:00.390 16:57:52 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:07:00.390 16:57:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:07:00.390 16:57:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:07:00.390 16:57:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:07:00.390 16:57:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:07:00.390 16:57:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:01.768 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:07:01.768 16:57:53 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:01.768 16:57:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:07:01.769 16:57:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:01.769 16:57:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:08.327 00:07:08.327 real 0m7.293s 00:07:08.327 user 0m0.856s 00:07:08.327 sys 0m1.464s 00:07:08.327 16:57:59 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.327 ************************************ 00:07:08.327 END TEST denied 00:07:08.327 16:57:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:07:08.327 ************************************ 00:07:08.327 16:57:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:07:08.327 16:57:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.327 16:57:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.327 16:57:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:08.327 ************************************ 00:07:08.327 START TEST allowed 00:07:08.327 ************************************ 00:07:08.327 16:57:59 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:07:08.327 16:57:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:07:08.327 16:57:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:07:08.327 16:57:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:07:08.327 16:57:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:07:08.327 16:57:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:08.894 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:08.894 16:58:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:09.827 00:07:09.827 real 0m2.264s 00:07:09.827 user 0m1.054s 00:07:09.827 sys 0m1.196s 00:07:09.827 16:58:02 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.827 16:58:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:07:09.827 ************************************ 00:07:09.827 END TEST allowed 00:07:09.827 ************************************ 00:07:09.827 ************************************ 00:07:09.827 END TEST acl 00:07:09.827 ************************************ 00:07:09.827 00:07:09.827 real 0m12.349s 00:07:09.827 user 0m3.154s 00:07:09.827 sys 0m4.190s 00:07:09.827 16:58:02 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.827 16:58:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 16:58:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:10.087 16:58:02 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.087 16:58:02 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.087 16:58:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 ************************************ 00:07:10.087 START TEST hugepages 00:07:10.087 ************************************ 00:07:10.087 16:58:02 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:07:10.087 * Looking for test storage... 
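
The denied and allowed runs above drive scripts/setup.sh with PCI_BLOCKED=' 0000:00:10.0' and PCI_ALLOWED=0000:00:10.0 respectively, then confirm through sysfs which driver each controller ended up on: nvme for a controller the kernel kept, uio_pci_generic once setup.sh has claimed it. The sketch below shows that sysfs check in isolation, assuming that resolving /sys/bus/pci/devices/<bdf>/driver is sufficient; verify_driver is a hypothetical helper written for illustration, not the verify() defined in test/setup/acl.sh.

    #!/usr/bin/env bash
    # Report which kernel driver each PCI function is bound to and compare it
    # with the driver the caller expects.
    verify_driver() {
        local expected=$1; shift
        local bdf driver
        for bdf in "$@"; do
            if [[ ! -e /sys/bus/pci/devices/$bdf ]]; then
                echo "$bdf: device not present"; return 1
            fi
            if [[ ! -e /sys/bus/pci/devices/$bdf/driver ]]; then
                echo "$bdf: no driver bound"; return 1
            fi
            driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
            if [[ $driver != "$expected" ]]; then
                echo "$bdf: bound to $driver, expected $expected"; return 1
            fi
            echo "$bdf: $driver (ok)"
        done
    }

    # Example checks matching this run (denied: the blocked controller keeps nvme;
    # allowed: only 0000:00:10.0 was handed to uio_pci_generic):
    #   verify_driver nvme 0000:00:10.0
    #   verify_driver nvme 0000:00:11.0 0000:00:12.0 0000:00:13.0
    verify_driver "$@"
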
00:07:10.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5816396 kB' 'MemAvailable: 7418192 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 444100 kB' 'Inactive: 1475024 kB' 'Active(anon): 112116 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103560 kB' 'Mapped: 48616 kB' 'Shmem: 10512 kB' 'KReclaimable: 63456 kB' 'Slab: 135980 kB' 'SReclaimable: 63456 kB' 'SUnreclaim: 72524 kB' 'KernelStack: 6336 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 325944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 
16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.087 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read 
-r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.088 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:10.089 16:58:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:07:10.089 16:58:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.089 16:58:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.089 16:58:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:10.089 ************************************ 00:07:10.089 START TEST default_setup 00:07:10.089 ************************************ 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- 
# node_ids=('0') 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:07:10.089 16:58:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.221 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.221 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.221 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.485 
16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.485 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935484 kB' 'MemAvailable: 9537092 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 462084 kB' 'Inactive: 1475044 kB' 'Active(anon): 130100 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121224 kB' 'Mapped: 48780 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135204 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72168 kB' 'KernelStack: 6368 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.486 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
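
The wall of [[ key == Hugepagesize ]] / continue lines above, and the matching AnonHugePages and HugePages_Surp scans around this point, are setup/common.sh's get_meminfo at work: each /proc/meminfo line is split on ': ', every key that is not the requested one is skipped, and the value is echoed on a match (Hugepagesize resolves to 2048 kB here, which is why default_hugepages becomes 2048). A compact re-implementation of that lookup is sketched below for readability; it replaces the mapfile plus extglob prefix stripping seen in the trace with a plain sed, so treat it as illustrative rather than the setup/common.sh source.

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo (or per-NUMA-node meminfo) field.
    get_meminfo() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; strip it so the keys
        # look like the global file, then split "Key:   value unit" on ': '.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # On this runner the lookups traced in this section amount to:
    #   get_meminfo Hugepagesize    -> 2048   (kB, so one default hugepage is 2 MiB)
    #   get_meminfo HugePages_Total -> 2048 before the tests, 1024 once default_setup requests 1024 pages
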
00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935052 kB' 'MemAvailable: 9536664 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461808 kB' 'Inactive: 1475048 kB' 'Active(anon): 129824 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120940 kB' 'Mapped: 48684 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135200 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6304 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.487 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.488 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934800 kB' 'MemAvailable: 9536416 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461380 kB' 'Inactive: 1475052 kB' 'Active(anon): 129396 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120756 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135204 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72168 kB' 'KernelStack: 6304 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.489 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.490 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 
16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
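The values these repeated scans extract (anon, surplus, reserved, total hugepages) feed the accounting check that appears a little further down in this trace (setup/hugepages.sh@97-@110), where the pool size is compared against the requested nr_hugepages. Below is a loose, hedged sketch of that bookkeeping; the value 1024 is the page count requested by default_setup in this particular run, and the meminfo helper is a plain awk lookup introduced only for this sketch, not the SPDK get_meminfo function.

#!/usr/bin/env bash
# Illustrative hugepage accounting check, assuming a 2 MiB default hugepage pool
# of 1024 pages as in this run of the log.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024
anon=$(meminfo AnonHugePages)      # transparent hugepages, expected 0 here
surp=$(meminfo HugePages_Surp)     # surplus pages beyond the configured pool
resv=$(meminfo HugePages_Rsvd)     # reserved but not yet faulted-in pages
total=$(meminfo HugePages_Total)   # configured pool size

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The pool should account for exactly the requested pages.
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2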
00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.491 nr_hugepages=1024 00:07:11.491 resv_hugepages=0 00:07:11.491 surplus_hugepages=0 00:07:11.491 anon_hugepages=0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:11.491 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935440 kB' 'MemAvailable: 9537056 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461688 kB' 'Inactive: 1475052 kB' 'Active(anon): 129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135204 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72168 kB' 'KernelStack: 6320 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.492 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.492 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.493 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
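[editor's note] The trace above walks the lookup pattern used by get_meminfo in setup/common.sh: every /proc/meminfo (or per-node meminfo) line is split with IFS=': ' into a key and a value, non-matching keys are skipped with "continue", and the value of the requested key (here HugePages_Total, which resolves to 1024) is echoed back to the caller. A minimal sketch of that pattern, reconstructed from the trace itself; the function name below is illustrative and the real SPDK helper may differ in details:

# Sketch of the meminfo lookup shown in the trace (illustrative only).
# $1: field name to look up (e.g. HugePages_Total)
# $2: optional NUMA node id; when set and the per-node meminfo file
#     exists, it is read instead of /proc/meminfo.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that so the
    # same "Key: value" parser handles both sources (needs extglob).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

On this host the sketch would print 1024 for HugePages_Total and, when given node 0, would read the node-0 file for HugePages_Surp, matching the values echoed further down in the trace.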
00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935188 kB' 'MemUsed: 4306788 kB' 'SwapCached: 0 kB' 'Active: 461640 kB' 'Inactive: 1475052 kB' 'Active(anon): 129656 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1817512 kB' 'Mapped: 48620 kB' 'AnonPages: 120756 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135204 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- 
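[editor's note] The printf above is the node-0 meminfo snapshot the surplus check parses. Its figures are internally consistent, which is a quick way to sanity-check a captured snapshot; the two identities below use only values from that line:

# MemUsed should equal MemTotal - MemFree for the node snapshot:
echo $(( 12241976 - 7935188 ))   # -> 4306788 kB, matching MemUsed
# Active should equal Active(anon) + Active(file):
echo $(( 129656 + 331984 ))      # -> 461640 kB, matching Active

The field the loop below is hunting for, HugePages_Surp, is 0 in this snapshot, which is the value eventually echoed at setup/common.sh@33.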
setup/common.sh@31 -- # IFS=': ' 00:07:11.494 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.495 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:07:11.496 node0=1024 expecting 1024 00:07:11.496 ************************************ 00:07:11.496 END TEST default_setup 00:07:11.496 ************************************ 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:11.496 00:07:11.496 real 0m1.456s 00:07:11.496 user 0m0.670s 00:07:11.496 sys 0m0.741s 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.496 16:58:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:07:11.763 16:58:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:07:11.763 16:58:03 
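[editor's note] default_setup passes here: node0 holds the 1024 pages the test expected ("node0=1024 expecting 1024", confirmed by the [[ 1024 == 1024 ]] check at hugepages.sh@130), and the test completes in roughly 1.5 s of wall time. The per_node_1G_alloc test that starts next asks get_test_nr_hugepages for 1048576 kB pinned to node 0; at the default 2048 kB hugepage size reported in the snapshots below, that works out to the 512 pages the trace then configures via NRHUGE=512 HUGENODE=0:

# 1 GiB expressed in default-sized hugepages (values taken from the trace below):
size_kb=1048576        # requested allocation in kB
hugepagesize_kb=2048   # Hugepagesize from /proc/meminfo
echo $(( size_kb / hugepagesize_kb ))   # -> 512, i.e. nr_hugepages=512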
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.763 16:58:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.763 16:58:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:11.763 ************************************ 00:07:11.763 START TEST per_node_1G_alloc 00:07:11.763 ************************************ 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.763 16:58:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.286 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.286 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.286 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:07:12.286 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8982780 kB' 'MemAvailable: 10584400 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 462060 kB' 'Inactive: 1475056 kB' 'Active(anon): 130076 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121216 kB' 'Mapped: 48956 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135204 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72168 kB' 'KernelStack: 6312 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.286 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
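[editor's note] The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at the start of verify_nr_hugepages checks that transparent hugepages are not globally disabled: the bracketed entry marks the active mode, and "always [madvise] never" (madvise active) is the value format used by /sys/kernel/mm/transparent_hugepage/enabled. Only then does the script account for AnonHugePages, which the comparison loop below resolves to 0 kB on this host. A hedged sketch of that guard; the sysfs path is an assumption inferred from the value format, since the trace only shows the string being tested:

# Guard sketch: skip anonymous-hugepage accounting when THP is set to "never".
# The path below is assumed; the SPDK script may obtain the string differently.
thp_file=/sys/kernel/mm/transparent_hugepage/enabled
if [[ -r $thp_file && $(<"$thp_file") != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 on this host
    echo "AnonHugePages: ${anon_kb} kB"
fi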
var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.287 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8982780 kB' 'MemAvailable: 10584400 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461700 kB' 'Inactive: 1475056 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135212 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6320 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 
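[editor's note] The second system-wide snapshot above confirms the per-node request took effect: HugePages_Total and HugePages_Free are both 512, nothing is reserved or surplus, and Hugetlb reports 1048576 kB, i.e. exactly 512 pages of 2048 kB. A quick cross-check against the figures in that line:

# Hugetlb should equal HugePages_Total * Hugepagesize:
echo $(( 512 * 2048 ))   # -> 1048576 kB, matching the Hugetlb field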
16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.288 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:07:12.289 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8983092 kB' 'MemAvailable: 10584712 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461656 kB' 'Inactive: 1475056 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135208 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 6320 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.290 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 
16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.291 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:12.292 nr_hugepages=512 00:07:12.292 resv_hugepages=0 00:07:12.292 surplus_hugepages=0 00:07:12.292 anon_hugepages=0 00:07:12.292 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:12.292 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8983344 kB' 'MemAvailable: 10584964 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461452 kB' 'Inactive: 1475056 kB' 'Active(anon): 129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135208 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 6304 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.293 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.294 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:12.295 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8984040 kB' 'MemUsed: 3257936 kB' 'SwapCached: 0 kB' 'Active: 461704 kB' 'Inactive: 1475056 kB' 'Active(anon): 129720 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1817512 kB' 'Mapped: 48620 kB' 'AnonPages: 120904 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135208 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.295 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.296 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 
16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:07:12.297 node0=512 expecting 512 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:12.297 00:07:12.297 real 0m0.727s 00:07:12.297 user 0m0.336s 00:07:12.297 sys 0m0.408s 00:07:12.297 ************************************ 00:07:12.297 END TEST per_node_1G_alloc 00:07:12.297 ************************************ 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.297 16:58:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.557 16:58:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:07:12.557 16:58:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.557 16:58:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.557 16:58:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:12.557 ************************************ 00:07:12.557 START TEST even_2G_alloc 00:07:12.557 ************************************ 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:12.557 
16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:12.557 16:58:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.079 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.079 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.079 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.079 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.079 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7932416 kB' 'MemAvailable: 9534036 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461968 kB' 'Inactive: 1475056 kB' 'Active(anon): 129984 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121384 kB' 'Mapped: 48856 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135264 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72228 kB' 'KernelStack: 6344 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.079 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.080 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7932164 kB' 'MemAvailable: 9533784 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461632 kB' 'Inactive: 1475056 kB' 'Active(anon): 129648 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121008 kB' 'Mapped: 48668 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135260 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72224 kB' 'KernelStack: 6312 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.081 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.082 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
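[Note] The repeated "IFS=': '", "read -r var val _", and "continue" entries above are the get_meminfo helper from setup/common.sh walking a captured /proc/meminfo snapshot key by key until it reaches the requested field (HugePages_Surp at this point in the trace). A minimal bash sketch of that loop, reconstructed from the trace rather than taken verbatim from the upstream script, so details such as the per-node handling may differ:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1        # meminfo key to look up, e.g. HugePages_Surp
        local node=${2:-}   # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node lookups read that node's meminfo file instead; in this trace
        # the [[ -e /sys/devices/system/node/node/meminfo ]] test fails because
        # no node argument was given.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix; strip it so both formats parse alike.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan the snapshot; every non-matching key produces one IFS/read/continue
        # triple in the xtrace, which is why the log repeats that pattern per field.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # the call being traced here; expected to print 0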
00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.083 
16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7932820 kB' 'MemAvailable: 9534440 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461760 kB' 'Inactive: 1475056 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135252 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72216 kB' 'KernelStack: 6320 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.083 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.084 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
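[Note] For orientation, the meminfo snapshots printed in this trace consistently report 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB', i.e. the 2G worth of 2 MiB hugepages that even_2G_alloc exercises. A quick sanity check of those numbers (the arithmetic is ours, not part of the log):

    echo $((1024 * 2048)) kB   # 1024 pages x 2048 kB = 2097152 kB = 2 GiB, matching the Hugetlb field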
00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:13.085 nr_hugepages=1024 00:07:13.085 resv_hugepages=0 00:07:13.085 surplus_hugepages=0 00:07:13.085 anon_hugepages=0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.085 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7933748 kB' 'MemAvailable: 9535368 kB' 'Buffers: 2436 kB' 'Cached: 1815076 kB' 'SwapCached: 0 kB' 'Active: 461772 kB' 'Inactive: 1475056 kB' 'Active(anon): 129788 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 48620 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135252 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72216 kB' 'KernelStack: 6320 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.086 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
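[Note] Just before this point the trace moved from setup/common.sh back into setup/hugepages.sh (lines @97..@110 in the log): anon, surp and resv all came back 0, the script echoed the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary lines seen interleaved above, passed its accounting checks, and is now re-reading HugePages_Total. A condensed paraphrase of that step, reconstructed from the traced line numbers rather than quoted from the upstream script; 'want' is a stand-in name for whatever expanded to 1024 in the log, and get_meminfo is the helper sketched earlier:

    #!/usr/bin/env bash
    want=1024
    nr_hugepages=1024

    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> 0

    echo "nr_hugepages=$nr_hugepages"    # @102..@105: the four summary lines
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    (( want == nr_hugepages + surp + resv ))   # @107: allocation accounting must balance
    (( want == nr_hugepages ))                 # @109: exactly the requested page count
    get_meminfo HugePages_Total                # @110: the lookup currently being traced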
00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 
16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.087 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7934428 kB' 'MemUsed: 4307548 kB' 'SwapCached: 0 kB' 'Active: 461744 kB' 'Inactive: 1475056 kB' 'Active(anon): 129760 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'FilePages: 1817512 kB' 'Mapped: 48620 kB' 'AnonPages: 120888 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135252 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.088 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.347 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:13.348 node0=1024 expecting 1024 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:13.348 00:07:13.348 real 0m0.778s 00:07:13.348 user 0m0.340s 00:07:13.348 sys 0m0.453s 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.348 16:58:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:13.348 ************************************ 00:07:13.348 END TEST even_2G_alloc 00:07:13.348 ************************************ 00:07:13.348 16:58:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:07:13.348 16:58:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.348 16:58:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.348 16:58:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
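The even_2G_alloc pass that just finished boils down to the hugepages.sh@110..130 records above: check that HugePages_Total equals the requested pages plus surplus and reserved, then fold reserved pages and each node's own surplus into a per-node expectation and print `node0=1024 expecting 1024`. A self-contained sketch of that accounting, with the helper name and bookkeeping reconstructed from the trace rather than copied from the SPDK source:

```bash
#!/usr/bin/env bash
# Sketch of the per-node accounting behind "node0=1024 expecting 1024".
shopt -s extglob

meminfo_val() {
    # First value after "KEY:" in a meminfo-style file; handles both
    # /proc/meminfo ("HugePages_Surp: 0") and the per-node files
    # ("Node 0 HugePages_Surp: 0").
    awk -v k="$1:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' \
        "${2:-/proc/meminfo}"
}

verify_nodes() {
    local nr_hugepages=$1 surp resv total node
    local -a nodes_sys=() nodes_test=()

    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    total=$(meminfo_val HugePages_Total)

    # Global check mirrored from the log: Total == requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || return 1

    for node in /sys/devices/system/node/node+([0-9]); do
        # Actual count from sysfs; hugepages-2048kB matches "Hugepagesize: 2048 kB".
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        # On this single-node VM the whole request is expected on node0.
        nodes_test[${node##*node}]=$nr_hugepages
    done

    for node in "${!nodes_test[@]}"; do
        # Fold reserved pages and the node's own surplus into the expectation,
        # then report it the way the log does.
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(meminfo_val HugePages_Surp \
                                 "/sys/devices/system/node/node$node/meminfo") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
    done
}

# verify_nodes 1024   # the even_2G_alloc case; odd_alloc repeats it with 1025
```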
00:07:13.348 ************************************ 00:07:13.348 START TEST odd_alloc 00:07:13.348 ************************************ 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.348 16:58:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:13.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.867 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.867 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.867 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.867 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935476 kB' 'MemAvailable: 9537112 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 462156 kB' 'Inactive: 1475064 kB' 'Active(anon): 130172 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121448 kB' 'Mapped: 48872 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 135152 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6452 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
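odd_alloc starts by requesting a deliberately odd page count: HUGEMEM=2049 MiB translates into the `get_test_nr_hugepages 2098176` call above, which settles on nr_hugepages=1025. The short worked example below reproduces those numbers; the round-up step is an inference from the logged 2098176 → 1025 pair, not a quote of the script:

```bash
#!/usr/bin/env bash
# Worked version of the size -> page-count step at the start of odd_alloc.
# HUGEMEM is in MiB; huge pages are 2048 kB ("Hugepagesize: 2048 kB" above).
HUGEMEM=2049
default_hugepages=2048                    # kB per huge page

size=$(( HUGEMEM * 1024 ))                # 2049 MiB -> 2098176 kB, as in the trace
# 2098176 / 2048 = 1024.5, so rounding up gives the odd count 1025
# (the round-up itself is an assumption inferred from the logged values).
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))

echo "size=${size} kB -> nr_hugepages=${nr_hugepages}"   # prints 1025
```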
00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.867 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
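The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` record above is the transparent-hugepage guard in verify_nr_hugepages: only when THP is not pinned to "never" does the script go on to read AnonHugePages (0 kB throughout this run) so anonymous huge pages can be accounted for separately. A hedged sketch of that guard, using the standard kernel sysfs path; the surrounding logic is reconstructed from the trace:

```bash
#!/usr/bin/env bash
# Sketch of the THP guard seen in the odd_alloc trace. The kernel reports the
# active mode in brackets, e.g. "always [madvise] never".
thp_enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)

anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
    # THP may hand out anonymous huge pages, so account for them separately
    # (AnonHugePages is 0 kB on this run).
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "AnonHugePages accounted: ${anon} kB"
```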
00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 
16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.868 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935224 kB' 'MemAvailable: 9536860 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 461456 kB' 'Inactive: 1475064 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120944 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 135152 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6364 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
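The walk under way here repeats the same field scan, this time for HugePages_Surp: surplus pages the kernel hands out above nr_hugepages when /proc/sys/vm/nr_overcommit_hugepages permits it. The verification then folds whatever value comes back into the per-node expectation, just as in the even_2G_alloc pass. A quick standalone check along the same lines (a sketch, not part of the SPDK scripts):

```bash
#!/usr/bin/env bash
# Quick surplus-page check matching what the HugePages_Surp scan verifies.
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
overcommit=$(< /proc/sys/vm/nr_overcommit_hugepages)

# With overcommit left at 0 (the kernel default), no surplus pages can appear
# and HugePages_Total should equal exactly what the test requested.
echo "HugePages_Surp=${surp} nr_overcommit_hugepages=${overcommit}"
(( surp == 0 )) || echo "unexpected surplus pages" >&2
```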
00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.869 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 
16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.870 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935544 kB' 'MemAvailable: 9537180 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 461320 kB' 'Inactive: 1475064 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120828 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 135156 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6332 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
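The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time (IFS=': ', read -r var val _) until it reaches the requested field, and switching to /sys/devices/system/node/node<N>/meminfo when a node argument is supplied. A minimal stand-alone sketch of that parsing pattern, in bash; the function name read_meminfo_value and its error handling are assumptions for illustration, not the repository's exact code:

    # read_meminfo_value KEY [NODE]
    # Print the value column for KEY from /proc/meminfo, or from the
    # per-node meminfo file when NODE is given (as in the node=0 case later).
    read_meminfo_value() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == Node ]]; then
                # per-node files prefix every line with "Node <n> "; reparse the rest
                IFS=': ' read -r var val _ <<<"$_"
            fi
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done <"$file"
        return 1
    }
    # e.g. read_meminfo_value HugePages_Rsvd     -> 0 on this machine
    #      read_meminfo_value HugePages_Surp 0   -> node 0's surplus count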
00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.871 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.872 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:13.873 nr_hugepages=1025 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:07:13.873 resv_hugepages=0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:13.873 surplus_hugepages=0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:13.873 anon_hugepages=0 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935544 kB' 'MemAvailable: 9537180 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 461516 kB' 'Inactive: 1475064 kB' 'Active(anon): 129532 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120736 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 135152 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6316 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.873 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 
16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.874 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935544 kB' 'MemUsed: 4306432 kB' 'SwapCached: 0 kB' 'Active: 461520 kB' 'Inactive: 1475064 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1817520 kB' 'Mapped: 48624 kB' 'AnonPages: 120736 kB' 'Shmem: 10472 kB' 'KernelStack: 6316 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63052 kB' 'Slab: 135152 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
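Taken together, the values pulled out above (surp=0, resv=0, HugePages_Total=1025, then the per-node HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo) are the odd-allocation accounting this test is asserting. A rough sketch of that check, reusing the illustrative read_meminfo_value helper from earlier; verify_odd_alloc is an assumed name, not the script's own:

    verify_odd_alloc() {
        # Check that an odd hugepage count (e.g. 1025) is fully accounted for:
        # kernel-reported total == requested + surplus + reserved.
        local requested=$1 surp resv total
        surp=$(read_meminfo_value HugePages_Surp)
        resv=$(read_meminfo_value HugePages_Rsvd)
        total=$(read_meminfo_value HugePages_Total)
        (( total == requested + surp + resv ))
    }
    # The per-node bookkeeping that follows in the trace uses the same reader,
    # just pointed at node 0 (the only node in this run), e.g.:
    #   node0_surp=$(read_meminfo_value HugePages_Surp 0)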
00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:13.875 16:58:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.134 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.135 node0=1025 expecting 1025 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:07:14.135 00:07:14.135 real 0m0.730s 00:07:14.135 user 0m0.356s 00:07:14.135 sys 0m0.415s 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.135 ************************************ 00:07:14.135 END TEST 
odd_alloc 00:07:14.135 ************************************ 00:07:14.135 16:58:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.135 16:58:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:07:14.135 16:58:06 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.135 16:58:06 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.135 16:58:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:14.135 ************************************ 00:07:14.135 START TEST custom_alloc 00:07:14.135 ************************************ 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 
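Between the END TEST banner and this point, custom_alloc has sized its pool: get_test_nr_hugepages is handed 1048576 (kB, judging by the "Hugetlb: 1048576 kB" lines dumped later in this log), which at the 2048 kB default huge page size works out to 512 pages, all assigned to node 0 of this single-node VM. A hedged sketch of that arithmetic (variable names are illustrative and the helper in setup/hugepages.sh may compute it differently; the numbers are taken from this run):

# Assumed inputs from this run: a 1048576 kB (1 GiB) request, 2048 kB huge pages, one NUMA node.
size_kb=1048576                                    # argument passed to get_test_nr_hugepages
hugepagesize_kb=2048                               # "Hugepagesize: 2048 kB" in the dumps below
nr_hugepages=$(( size_kb / hugepagesize_kb ))      # 1048576 / 2048 = 512
no_nodes=1
declare -a nodes_hp
for (( node = 0; node < no_nodes; node++ )); do
    nodes_hp[node]=$(( nr_hugepages / no_nodes ))  # single node, so all 512 pages land on node 0
done
echo "HUGENODE=nodes_hp[0]=${nodes_hp[0]}"         # matches the HUGENODE='nodes_hp[0]=512' assignment below

That HUGENODE string is assembled just before the test invokes scripts/setup.sh in the trace that follows, describing the per-node layout to reserve.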
00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:14.135 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.657 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.657 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.657 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.657 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:14.657 16:58:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8986760 kB' 'MemAvailable: 10588396 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 459444 kB' 'Inactive: 1475064 kB' 'Active(anon): 127460 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 118360 kB' 'Mapped: 48076 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 135156 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 72104 kB' 'KernelStack: 6296 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
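verify_nr_hugepages only folds anonymous transparent huge page usage into its totals when THP is not disabled: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above reads the enabled-modes string, in which the kernel brackets the active mode. A small sketch of the same gate, reusing the get_meminfo sketch shown earlier (standard sysfs path; behaviour as observed on this test VM, illustrative only):

# The kernel marks the active THP mode with brackets, e.g. "always [madvise] never" on this VM.
thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_enabled != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB of anonymous huge pages, sampled system-wide
else
    anon=0                              # THP disabled: nothing to add to the accounting
fi
echo "AnonHugePages: ${anon:-0} kB"     # 0 kB in this run

Because no node argument is passed here, get_meminfo falls back to /proc/meminfo, which is why the dump above is the system-wide view rather than a node0 one.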
00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.657 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.658 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.659 16:58:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8986760 kB' 'MemAvailable: 10588388 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 459272 kB' 'Inactive: 1475064 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118392 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135148 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72112 kB' 'KernelStack: 6256 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.659 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
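The few fields remaining in this scan (continued directly below) bring the walk to HugePages_Surp, and the same pass then repeats once more for HugePages_Rsvd. With both in hand, verify_nr_hugepages applies the same accounting check that appeared at hugepages.sh@110 for odd_alloc above, HugePages_Total == nr_hugepages + surp + resv, now with custom_alloc's numbers. A worked instance using this run's values (only the identity shown in the trace; the surrounding script does more than this one line):

# Values from this run of custom_alloc:
nr_hugepages=512    # pool size requested by the test
surp=0              # HugePages_Surp, found just below
resv=0              # HugePages_Rsvd, looked up immediately after
total=512           # HugePages_Total from the dump above
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"   # 512 == 512 + 0 + 0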
00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.660 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.661 16:58:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8986760 kB' 'MemAvailable: 10588388 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 459040 kB' 'Inactive: 1475064 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118164 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135136 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6224 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.661 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.661 16:58:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [... xtrace condensed: keys MemAvailable through FilePmdMapped are each compared against HugePages_Rsvd at setup/common.sh@32 and hit continue ...] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:14.663 nr_hugepages=512 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:07:14.663 resv_hugepages=0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:14.663 surplus_hugepages=0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:14.663 anon_hugepages=0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
val 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8986760 kB' 'MemAvailable: 10588388 kB' 'Buffers: 2436 kB' 'Cached: 1815084 kB' 'SwapCached: 0 kB' 'Active: 459208 kB' 'Inactive: 1475064 kB' 'Active(anon): 127224 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135136 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72100 kB' 'KernelStack: 6240 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.663 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.663 16:58:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: keys Cached through CmaFree are each compared against HugePages_Total at setup/common.sh@32 and hit continue ...] 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
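The repeated setup/common.sh@31/@32 entries in this stretch of the log are the xtrace of a key-scanning loop over /proc/meminfo (or a per-node meminfo file). A minimal sketch of that lookup, with an assumed function name and a simplified structure (the traced script first buffers the file with mapfile and, for node files, strips the leading "Node <N> " prefix), might look like:

    # Sketch only: print the value of one /proc/meminfo key, e.g. HugePages_Surp.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys produce the 'continue' lines above
            echo "$val"                        # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done </proc/meminfo
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the run traced here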
00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.665 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8986760 kB' 'MemUsed: 3255216 kB' 'SwapCached: 0 kB' 'Active: 458980 kB' 'Inactive: 1475060 kB' 'Active(anon): 126996 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1817516 kB' 'Mapped: 47880 kB' 'AnonPages: 117904 kB' 'Shmem: 10472 
kB' 'KernelStack: 6192 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135108 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... xtrace condensed: the node0 meminfo keys MemTotal through Unaccepted are each compared against HugePages_Surp at setup/common.sh@32 and hit continue ...] 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # continue 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:14.666 node0=512 expecting 512 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:07:14.666 00:07:14.666 real 0m0.708s 00:07:14.666 user 0m0.326s 00:07:14.666 sys 0m0.433s 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.666 16:58:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.666 ************************************ 00:07:14.666 END TEST custom_alloc 00:07:14.666 ************************************ 00:07:14.924 16:58:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:07:14.924 16:58:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.924 16:58:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.924 16:58:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:14.924 ************************************ 00:07:14.924 START TEST no_shrink_alloc 00:07:14.924 ************************************ 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@51 -- # shift 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:07:14.924 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:14.925 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:15.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:15.445 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:15.446 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:15.446 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:15.446 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936540 kB' 'MemAvailable: 9538164 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459332 kB' 'Inactive: 1475060 kB' 'Active(anon): 127348 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118412 kB' 'Mapped: 47920 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135052 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6264 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.446 16:58:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936540 kB' 'MemAvailable: 9538164 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459316 kB' 'Inactive: 1475060 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118396 kB' 'Mapped: 47976 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6208 kB' 'PageTables: 3588 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.447 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.448 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.448 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936540 kB' 'MemAvailable: 9538164 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459204 kB' 'Inactive: 1475060 kB' 'Active(anon): 127220 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118320 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6240 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.449 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.450 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:15.451 nr_hugepages=1024 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:15.451 resv_hugepages=0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:15.451 surplus_hugepages=0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:15.451 anon_hugepages=0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.451 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936540 kB' 'MemAvailable: 9538164 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 458828 kB' 'Inactive: 1475060 kB' 'Active(anon): 126844 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118000 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6240 kB' 'PageTables: 3672 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
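The wall of `IFS=': '` / `read -r var val _` / `continue` entries above and below this point is the get_meminfo helper from setup/common.sh scanning one meminfo line per iteration until it reaches the requested key (here HugePages_Total, which it answers with `echo 1024`). A minimal sketch of that loop, reconstructed from the trace itself; the command names follow the trace, but the exact control flow and error handling in setup/common.sh may differ:

```bash
#!/usr/bin/env bash
# Sketch of setup/common.sh::get_meminfo as reconstructed from this trace (simplified).
# Usage: get_meminfo <Key> [<node>]   e.g. get_meminfo HugePages_Total; get_meminfo HugePages_Surp 0
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val mem_f mem line

    mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo instead (node0 later in this trace)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix carried by per-node files

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # every key that is not the one requested shows up as one "continue" entry in the log
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. "1024" for HugePages_Total, "0" for HugePages_Surp
        return 0
    done
    return 1
}
```

Called as `get_meminfo HugePages_Total` it reads /proc/meminfo; called as `get_meminfo HugePages_Surp 0`, as it is further down in this trace, it reads node0's own meminfo file.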
00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
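For orientation, the hugepages.sh entries wrapped around these meminfo scans are the verify_nr_hugepages step of the no_shrink_alloc test: it confirms the kernel still reports the 1024 pages configured earlier (surplus and reserved both 0), checks the per-node count ("node0=1024 expecting 1024" further down), and then re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, expecting the "Requested 512 hugepages but 1024 already allocated on node0" message rather than a shrink of the existing allocation. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above; the variable names and error handling here are illustrative, not the literal setup/hugepages.sh code:

```bash
nr_hugepages=1024                         # value the test configured earlier in the run
surp=$(get_meminfo HugePages_Surp)        # 0 in this trace
resv=$(get_meminfo HugePages_Rsvd)        # 0 in this trace
total=$(get_meminfo HugePages_Total)      # 1024 in this trace

# system-wide accounting has to add up before the per-node check
(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }

# each NUMA node (only node0 on this VM) must still hold the expected pages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_total=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$node_total expecting $nr_hugepages"
    [[ $node_total == "$nr_hugepages" ]] || exit 1
done
```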
00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.452 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:15.453 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:15.453 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936540 kB' 'MemUsed: 4305436 kB' 'SwapCached: 0 kB' 'Active: 458796 kB' 'Inactive: 1475060 kB' 'Active(anon): 126812 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1817516 kB' 'Mapped: 47880 kB' 'AnonPages: 118228 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.454 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:15.455 node0=1024 expecting 1024 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:07:15.455 16:58:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:16.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.024 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:16.024 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:16.024 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:16.024 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:16.024 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.024 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7935796 kB' 'MemAvailable: 9537420 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459856 kB' 'Inactive: 1475060 kB' 'Active(anon): 127872 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118956 kB' 'Mapped: 48028 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135040 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72004 kB' 'KernelStack: 6344 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 
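A little earlier in the trace, hugepages.sh sets CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh, which then reports that 1024 hugepages are already allocated on node0 (the INFO line above). Outside the harness, an equivalent invocation would look roughly like the sketch below; treating NRHUGE and CLEAR_HUGE as the only knobs involved is an assumption, setup.sh honours other variables as well.

  # ask setup.sh for 512 x 2 MiB hugepages without clearing what is already allocated
  # (mirrors the CLEAR_HUGE=no NRHUGE=512 values visible in the trace; usually needs root)
  sudo CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo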
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.025 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 
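Each repeated "continue" in the trace is one iteration of the key lookup in the setup/common.sh get_meminfo helper: /proc/meminfo (or the per-node meminfo file) is read line by line with IFS=': ', every key that is not the requested one is skipped, and the value of the matching key is echoed. A minimal standalone sketch of that pattern follows; the function name and layout here are illustrative, not the exact helper from the trace.

  get_meminfo_value() {
      local want=$1 node=${2:-}
      local file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          # per-node files prefix each line with "Node N "; the real helper strips
          # that prefix, which is omitted here for brevity
          file=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] && { echo "${val:-0}"; return 0; }
      done < "$file"
      echo 0
  }

  get_meminfo_value HugePages_Surp   # prints 0 on this box, matching the trace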
16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 
16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936048 kB' 'MemAvailable: 9537672 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 
'SwapCached: 0 kB' 'Active: 459264 kB' 'Inactive: 1475060 kB' 'Active(anon): 127280 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118380 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6256 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
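The pass being traced here belongs to verify_nr_hugepages: it first checks that transparent hugepages are not forced to "never", then collects AnonHugePages, HugePages_Surp and HugePages_Rsvd, and finally compares the per-node total against the expected count (the "node0=1024 expecting 1024" line earlier). A condensed, single-node sketch of that flow is below; the awk one-liner stands in for the get_meminfo loop, and the expected value of 1024 is simply the number this test run uses.

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  meminfo() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

  anon=0
  [[ $thp != *\[never\]* ]] && anon=$(meminfo AnonHugePages)
  surp=$(meminfo HugePages_Surp)
  resv=$(meminfo HugePages_Rsvd)
  total=$(meminfo HugePages_Total)

  expected=1024                       # value this run expects on node0
  echo "node0=$total expecting $expected"
  [[ $total -eq $expected ]]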
00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.026 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 
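For reference, the meminfo snapshots printed above are self-consistent: 1024 hugepages at the reported Hugepagesize of 2048 kB account exactly for the Hugetlb figure, i.e. 2 GiB pinned, with HugePages_Rsvd and HugePages_Surp both 0. A quick arithmetic check:

  echo $(( 1024 * 2048 ))             # 2097152 (kB), matches the 'Hugetlb:' line
  echo $(( 1024 * 2048 / 1024**2 ))   # 2 (GiB held in hugepages)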
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.027 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936048 kB' 'MemAvailable: 9537672 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459172 kB' 'Inactive: 1475060 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 
kB' 'AnonPages: 118296 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6256 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.028 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.029 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.029 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.029 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.290 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:07:16.291 nr_hugepages=1024 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:07:16.291 resv_hugepages=0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:07:16.291 surplus_hugepages=0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:07:16.291 anon_hugepages=0 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.291 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936048 kB' 'MemAvailable: 9537672 kB' 'Buffers: 2436 kB' 'Cached: 1815080 kB' 'SwapCached: 0 kB' 'Active: 459140 kB' 'Inactive: 1475060 kB' 'Active(anon): 127156 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118296 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6256 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.292 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
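The repeated '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' entries above and below are setup/common.sh's get_meminfo walking the captured meminfo snapshot one field at a time until it reaches HugePages_Total, at which point it echoes the value (the 'echo 1024' further down in this trace). A minimal sketch of that per-key scan, reconstructed from the trace rather than copied from the upstream script (the helper name, the line-by-line read, and the node fallback are assumptions):

get_meminfo_sketch() {
    # Prefer the per-node meminfo file when a node is given; otherwise fall back to
    # /proc/meminfo, which is what happens here because 'local node=' leaves it empty.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key shows up as a 'continue' in the trace
        echo "${val:-0}"
        return 0
    done <"$mem_f"
    echo 0   # key not present in the snapshot
}

On this runner the scan resolves HugePages_Total to 1024, which hugepages.sh then compares against nr_hugepages + surp + resv.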
00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.293 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7936048 kB' 'MemUsed: 4305928 kB' 'SwapCached: 0 kB' 'Active: 458896 kB' 'Inactive: 1475060 kB' 'Active(anon): 126912 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1475060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1817516 kB' 'Mapped: 47880 kB' 'AnonPages: 118300 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63036 kB' 'Slab: 135044 kB' 'SReclaimable: 63036 kB' 'SUnreclaim: 72008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.294 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:16.295 node0=1024 expecting 1024 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:16.295 00:07:16.295 real 0m1.405s 00:07:16.295 user 0m0.654s 00:07:16.295 sys 0m0.846s 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.295 16:58:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:07:16.295 ************************************ 00:07:16.295 END TEST no_shrink_alloc 00:07:16.295 ************************************ 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:16.295 16:58:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:16.295 00:07:16.295 real 0m6.288s 00:07:16.295 user 0m2.848s 00:07:16.295 sys 0m3.576s 00:07:16.295 16:58:08 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.295 ************************************ 00:07:16.295 END TEST hugepages 00:07:16.295 
************************************ 00:07:16.295 16:58:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:07:16.295 16:58:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:16.295 16:58:08 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.295 16:58:08 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.295 16:58:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:16.295 ************************************ 00:07:16.295 START TEST driver 00:07:16.295 ************************************ 00:07:16.295 16:58:08 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:07:16.554 * Looking for test storage... 00:07:16.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:16.554 16:58:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:07:16.554 16:58:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:16.554 16:58:08 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:23.124 16:58:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:23.124 16:58:14 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.124 16:58:14 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.124 16:58:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:23.124 ************************************ 00:07:23.124 START TEST guess_driver 00:07:23.124 ************************************ 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:07:23.124 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:07:23.125 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:23.125 Looking for driver=uio_pci_generic 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:07:23.125 16:58:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:23.125 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:07:23.125 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:07:23.125 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:07:23.692 16:58:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:23.692 16:58:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:23.692 16:58:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:07:23.692 16:58:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:23.692 16:58:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.279 00:07:30.279 real 0m7.274s 00:07:30.279 user 0m0.820s 00:07:30.279 sys 0m1.548s 00:07:30.279 16:58:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.279 ************************************ 00:07:30.279 END TEST guess_driver 00:07:30.279 ************************************ 00:07:30.279 16:58:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 ************************************ 
00:07:30.279 END TEST driver 00:07:30.279 ************************************ 00:07:30.279 00:07:30.279 real 0m13.407s 00:07:30.279 user 0m1.185s 00:07:30.279 sys 0m2.410s 00:07:30.279 16:58:22 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.279 16:58:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 16:58:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:30.279 16:58:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.279 16:58:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.279 16:58:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:30.279 ************************************ 00:07:30.279 START TEST devices 00:07:30.279 ************************************ 00:07:30.279 16:58:22 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:07:30.279 * Looking for test storage... 00:07:30.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:07:30.279 16:58:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:30.279 16:58:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:07:30.279 16:58:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:30.279 16:58:22 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices 
-- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:31.215 16:58:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:07:31.215 No valid GPT data, bailing 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:31.215 
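The trace above walks /sys/block/nvme*, skipping zoned namespaces and keeping only disks large enough for the tests. A minimal sketch of that scan pattern, assuming plain sysfs reads rather than the SPDK helpers driven in the trace:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, matching the trace
blocks=()
for block in /sys/block/nvme*; do
  dev=${block##*/}
  # A namespace is zoned when queue/zoned reports anything other than "none".
  if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
    continue
  fi
  size=$(( $(<"$block/size") * 512 ))       # sysfs size is in 512-byte sectors
  (( size >= min_disk_size )) && blocks+=("$dev")
done
printf 'candidate disks: %s\n' "${blocks[*]}"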
16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:07:31.215 No valid GPT data, bailing 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:07:31.215 No valid GPT data, bailing 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.215 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.215 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:07:31.216 16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:07:31.216 16:58:23 setup.sh.devices 
-- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:07:31.216 16:58:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:31.216 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:07:31.216 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:07:31.216 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:07:31.475 No valid GPT data, bailing 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:07:31.475 No valid GPT data, bailing 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:07:31.475 16:58:23 setup.sh.devices -- 
setup/common.sh@80 -- # echo 4294967296 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:07:31.475 No valid GPT data, bailing 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:07:31.475 16:58:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:07:31.475 16:58:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:31.475 16:58:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:31.475 16:58:23 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.475 16:58:23 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.475 16:58:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 ************************************ 00:07:31.475 START TEST nvme_mount 00:07:31.475 ************************************ 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:31.475 
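Each candidate disk above is also probed with blkid before being claimed; "No valid GPT data, bailing" followed by "return 1" means the disk carries no partition table and is free for the mount tests. A simplified sketch of that check, using a hypothetical helper rather than the scripts/common.sh version driven through spdk-gpt.py:

block_in_use() {
  local dev=$1 pt
  # An empty PTTYPE from blkid means no partition table, so the disk is not in use.
  pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
  [[ -n $pt ]]
}
block_in_use nvme0n1 || echo 'nvme0n1 has no partition table; safe to use for the test'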
16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:31.475 16:58:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:32.850 Creating new GPT entries in memory. 00:07:32.850 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:32.850 other utilities. 00:07:32.850 16:58:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:32.850 16:58:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:32.850 16:58:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:32.850 16:58:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:32.850 16:58:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:33.785 Creating new GPT entries in memory. 00:07:33.785 The operation has completed successfully. 
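The sgdisk calls above zap the disk and then carve one test partition; the 2048..264191 range follows from dividing the requested byte size by 4096 and using the result as a sector count. A minimal sketch of that partitioning loop, with udevadm settle standing in for the sync_dev_uevents.sh call in the trace:

disk=nvme0n1
part_no=1
size=$(( 1073741824 / 4096 ))            # 262144, the sector count used below
sgdisk "/dev/$disk" --zap-all            # destroy any existing GPT/MBR structures
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 )) # 2048 + 262144 - 1 = 264191, as in the log
  sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
done
udevadm settle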
00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59712 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:33.785 16:58:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:33.785 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.043 16:58:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:34.043 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.043 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:34.043 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.043 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:34.043 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.610 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:34.610 16:58:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:34.610 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:34.610 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:34.869 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.869 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.869 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:34.869 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:34.869 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:07:34.869 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:07:34.869 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:34.869 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:34.869 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.128 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.387 16:58:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:35.954 16:58:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:36.522 16:58:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:37.089 16:58:29 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:37.089 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:37.089 00:07:37.089 real 0m5.586s 00:07:37.089 user 0m1.542s 00:07:37.089 sys 0m1.709s 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.089 ************************************ 00:07:37.089 END TEST nvme_mount 00:07:37.089 ************************************ 00:07:37.089 16:58:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 16:58:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:37.089 16:58:29 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.089 16:58:29 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.089 16:58:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:37.354 ************************************ 00:07:37.354 START TEST dm_mount 00:07:37.354 ************************************ 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:07:37.354 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:37.355 16:58:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:38.312 Creating new GPT entries in memory. 00:07:38.312 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:38.312 other utilities. 00:07:38.312 16:58:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:38.312 16:58:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:38.312 16:58:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:38.312 16:58:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:38.312 16:58:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:07:39.255 Creating new GPT entries in memory. 00:07:39.255 The operation has completed successfully. 00:07:39.255 16:58:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:39.255 16:58:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:39.255 16:58:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:39.255 16:58:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:39.255 16:58:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:07:40.191 The operation has completed successfully. 
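For dm_mount the same loop produces two partitions (2048:264191 and 264192:526335), which the dmsetup create step that follows joins into a single device-mapper node. The trace only shows "dmsetup create nvme_dm_test", not the table it is fed; the sketch below assumes a plain linear concatenation of the two partitions:

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")             # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test        # matches the mkfs call in the trace
mount /dev/mapper/nvme_dm_test /mnt/dm_test   # hypothetical mount point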
00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60352 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:40.191 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:40.449 16:58:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:40.708 16:58:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.708 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:40.708 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.966 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:40.966 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:40.966 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:40.966 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.224 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.224 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.483 16:58:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.741 16:58:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.741 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.741 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.741 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.742 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.742 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:41.742 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:42.309 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:07:42.309 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:42.309 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:42.309 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:42.309 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:07:42.310 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:42.310 16:58:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:42.568 00:07:42.568 real 0m5.219s 00:07:42.568 user 0m0.952s 00:07:42.568 sys 0m1.183s 00:07:42.568 16:58:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.568 ************************************ 00:07:42.568 END TEST dm_mount 00:07:42.568 ************************************ 00:07:42.568 16:58:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:42.568 16:58:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:42.826 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:42.826 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:42.826 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:42.826 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:42.826 16:58:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:42.826 ************************************ 00:07:42.826 END TEST devices 00:07:42.826 ************************************ 00:07:42.826 00:07:42.826 real 0m12.978s 00:07:42.826 user 0m3.485s 00:07:42.826 sys 0m3.761s 00:07:42.826 16:58:35 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.826 16:58:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:42.826 00:07:42.826 real 0m45.325s 00:07:42.826 user 0m10.784s 00:07:42.826 sys 0m14.112s 00:07:42.826 16:58:35 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.826 ************************************ 00:07:42.826 END TEST setup.sh 00:07:42.826 ************************************ 00:07:42.826 16:58:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:42.826 16:58:35 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:43.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.961 Hugepages 00:07:43.961 node hugesize free / total 00:07:43.961 node0 1048576kB 0 / 0 00:07:43.961 node0 2048kB 2048 / 2048 00:07:43.961 
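
(Illustrative sketch, not part of the captured output.) The per-node hugepage counts printed just above by setup.sh status can also be read directly from sysfs; a minimal check, assuming a single NUMA node (node0) and the default 2 MiB hugepage size shown in the log:

    # Reserved vs. free 2 MiB hugepages for NUMA node 0
    base=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    echo "total: $(cat "$base/nr_hugepages")"
    echo "free:  $(cat "$base/free_hugepages")"
    # System-wide view is also available in /proc/meminfo
    grep -i huge /proc/meminfo
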
00:07:43.961 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:43.961 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:43.961 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:44.220 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:44.220 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:44.220 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:44.220 16:58:36 -- spdk/autotest.sh@130 -- # uname -s 00:07:44.220 16:58:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:44.220 16:58:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:44.220 16:58:36 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:44.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.356 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.356 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.356 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.615 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.615 16:58:37 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:46.550 16:58:38 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:46.550 16:58:38 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:46.550 16:58:38 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:46.550 16:58:38 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:46.550 16:58:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:46.550 16:58:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:46.550 16:58:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.550 16:58:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.550 16:58:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:46.550 16:58:38 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:07:46.550 16:58:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.550 16:58:38 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.148 Waiting for block devices as requested 00:07:47.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.406 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.406 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.406 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.675 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.675 16:58:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:52.675 16:58:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:52.675 16:58:44 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:52.675 16:58:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:52.675 16:58:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:52.675 16:58:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:52.675 16:58:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:52.675 16:58:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:52.675 16:58:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:07:52.675 16:58:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:52.675 16:58:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:52.675 16:58:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:52.675 16:58:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:52.675 16:58:44 -- common/autotest_common.sh@1557 -- # continue 00:07:52.675 16:58:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:52.675 16:58:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:52.675 16:58:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:52.675 16:58:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:52.675 16:58:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:52.676 16:58:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:52.676 16:58:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:52.676 16:58:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:52.676 16:58:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:52.676 16:58:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:52.676 16:58:44 -- common/autotest_common.sh@1557 -- # continue 00:07:52.676 16:58:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:52.676 16:58:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:52.676 16:58:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:52.676 16:58:44 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:07:52.676 16:58:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:52.676 16:58:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:52.676 16:58:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:52.676 16:58:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:52.676 16:58:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:52.676 16:58:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1557 -- # continue 00:07:52.676 16:58:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:52.676 16:58:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:52.676 16:58:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:07:52.676 16:58:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:52.676 16:58:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:52.676 16:58:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:52.676 16:58:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:52.676 16:58:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:52.676 16:58:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:52.676 16:58:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:52.676 16:58:45 -- common/autotest_common.sh@1557 -- # continue 00:07:52.676 16:58:45 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:07:52.676 16:58:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.676 16:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:52.676 16:58:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:52.676 16:58:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.676 16:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:52.676 16:58:45 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:53.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.810 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.810 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.068 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.068 16:58:46 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:54.068 16:58:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.068 16:58:46 -- common/autotest_common.sh@10 -- # set +x 00:07:54.068 16:58:46 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:54.068 16:58:46 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:54.068 16:58:46 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:54.068 16:58:46 -- common/autotest_common.sh@1577 -- # bdfs=() 00:07:54.068 16:58:46 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:54.068 16:58:46 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:54.068 16:58:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:54.068 16:58:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:54.068 16:58:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:54.068 16:58:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:54.068 16:58:46 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:54.068 16:58:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:54.068 16:58:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:54.068 16:58:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:54.068 16:58:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:54.068 16:58:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:54.068 16:58:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:54.068 16:58:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:54.068 16:58:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:54.068 
16:58:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:54.068 16:58:46 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:54.068 16:58:46 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:54.068 16:58:46 -- common/autotest_common.sh@1593 -- # return 0 00:07:54.068 16:58:46 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:54.068 16:58:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:54.068 16:58:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:54.068 16:58:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:54.068 16:58:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:54.068 16:58:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:54.068 16:58:46 -- common/autotest_common.sh@10 -- # set +x 00:07:54.068 16:58:46 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:54.068 16:58:46 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:54.068 16:58:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.068 16:58:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.068 16:58:46 -- common/autotest_common.sh@10 -- # set +x 00:07:54.327 ************************************ 00:07:54.327 START TEST env 00:07:54.327 ************************************ 00:07:54.327 16:58:46 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:54.327 * Looking for test storage... 00:07:54.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:54.327 16:58:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:54.327 16:58:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.327 16:58:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.327 16:58:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:54.327 ************************************ 00:07:54.327 START TEST env_memory 00:07:54.327 ************************************ 00:07:54.327 16:58:46 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:54.327 00:07:54.327 00:07:54.327 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.327 http://cunit.sourceforge.net/ 00:07:54.327 00:07:54.327 00:07:54.327 Suite: memory 00:07:54.327 Test: alloc and free memory map ...[2024-07-25 16:58:46.722745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:54.327 passed 00:07:54.327 Test: mem map translation ...[2024-07-25 16:58:46.790069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:54.327 [2024-07-25 16:58:46.790228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:54.327 [2024-07-25 16:58:46.790384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:54.327 [2024-07-25 16:58:46.790475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:54.586 passed 00:07:54.586 Test: mem map registration ...[2024-07-25 16:58:46.939797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:07:54.586 [2024-07-25 16:58:46.939930] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:54.586 passed 00:07:54.844 Test: mem map adjacent registrations ...passed 00:07:54.844 00:07:54.844 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.844 suites 1 1 n/a 0 0 00:07:54.844 tests 4 4 4 0 0 00:07:54.844 asserts 152 152 152 0 n/a 00:07:54.844 00:07:54.844 Elapsed time = 0.400 seconds 00:07:54.844 00:07:54.844 real 0m0.448s 00:07:54.844 user 0m0.404s 00:07:54.844 sys 0m0.034s 00:07:54.844 16:58:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.844 16:58:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 ************************************ 00:07:54.844 END TEST env_memory 00:07:54.844 ************************************ 00:07:54.844 16:58:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:54.844 16:58:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.844 16:58:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.844 16:58:47 env -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 ************************************ 00:07:54.844 START TEST env_vtophys 00:07:54.844 ************************************ 00:07:54.844 16:58:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:54.844 EAL: lib.eal log level changed from notice to debug 00:07:54.844 EAL: Detected lcore 0 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 1 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 2 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 3 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 4 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 5 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 6 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 7 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 8 as core 0 on socket 0 00:07:54.844 EAL: Detected lcore 9 as core 0 on socket 0 00:07:54.844 EAL: Maximum logical cores by configuration: 128 00:07:54.844 EAL: Detected CPU lcores: 10 00:07:54.844 EAL: Detected NUMA nodes: 1 00:07:54.844 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:54.844 EAL: Detected shared linkage of DPDK 00:07:54.844 EAL: No shared files mode enabled, IPC will be disabled 00:07:54.844 EAL: Selected IOVA mode 'PA' 00:07:54.844 EAL: Probing VFIO support... 00:07:54.844 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:54.844 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:54.844 EAL: Ask a virtual area of 0x2e000 bytes 00:07:54.844 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:54.844 EAL: Setting up physically contiguous memory... 
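
(Illustrative sketch, not captured output.) The EAL lines above fall back from VFIO to the uio path because /sys/module/vfio is absent on this VM. A quick way to check which driver path a host offers might look like the following; the module names are the standard Linux ones, not taken from this log:

    # Is the vfio-pci driver loaded?
    if [ -d /sys/module/vfio_pci ]; then
        echo "vfio-pci available"
    else
        echo "vfio-pci missing; EAL will skip VFIO and use uio_pci_generic"
        # Loading it requires root and IOMMU support, e.g.:
        # modprobe vfio-pci
    fi
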
00:07:54.844 EAL: Setting maximum number of open files to 524288 00:07:54.844 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:54.844 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:54.844 EAL: Ask a virtual area of 0x61000 bytes 00:07:54.844 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:54.844 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:54.844 EAL: Ask a virtual area of 0x400000000 bytes 00:07:54.844 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:54.844 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:54.844 EAL: Ask a virtual area of 0x61000 bytes 00:07:54.844 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:54.844 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:54.844 EAL: Ask a virtual area of 0x400000000 bytes 00:07:54.844 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:54.844 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:54.844 EAL: Ask a virtual area of 0x61000 bytes 00:07:54.844 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:54.844 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:54.844 EAL: Ask a virtual area of 0x400000000 bytes 00:07:54.844 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:54.844 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:54.844 EAL: Ask a virtual area of 0x61000 bytes 00:07:54.844 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:54.844 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:54.844 EAL: Ask a virtual area of 0x400000000 bytes 00:07:54.844 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:54.844 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:54.845 EAL: Hugepages will be freed exactly as allocated. 00:07:54.845 EAL: No shared files mode enabled, IPC is disabled 00:07:54.845 EAL: No shared files mode enabled, IPC is disabled 00:07:55.103 EAL: TSC frequency is ~2200000 KHz 00:07:55.103 EAL: Main lcore 0 is ready (tid=7f3d07cc8a40;cpuset=[0]) 00:07:55.103 EAL: Trying to obtain current memory policy. 00:07:55.103 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.103 EAL: Restoring previous memory policy: 0 00:07:55.103 EAL: request: mp_malloc_sync 00:07:55.103 EAL: No shared files mode enabled, IPC is disabled 00:07:55.103 EAL: Heap on socket 0 was expanded by 2MB 00:07:55.103 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:55.103 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:55.103 EAL: Mem event callback 'spdk:(nil)' registered 00:07:55.103 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:55.103 00:07:55.103 00:07:55.103 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.103 http://cunit.sourceforge.net/ 00:07:55.103 00:07:55.103 00:07:55.103 Suite: components_suite 00:07:55.670 Test: vtophys_malloc_test ...passed 00:07:55.670 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:07:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.670 EAL: Restoring previous memory policy: 4 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was expanded by 4MB 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was shrunk by 4MB 00:07:55.670 EAL: Trying to obtain current memory policy. 00:07:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.670 EAL: Restoring previous memory policy: 4 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was expanded by 6MB 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was shrunk by 6MB 00:07:55.670 EAL: Trying to obtain current memory policy. 00:07:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.670 EAL: Restoring previous memory policy: 4 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was expanded by 10MB 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was shrunk by 10MB 00:07:55.670 EAL: Trying to obtain current memory policy. 00:07:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.670 EAL: Restoring previous memory policy: 4 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was expanded by 18MB 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was shrunk by 18MB 00:07:55.670 EAL: Trying to obtain current memory policy. 00:07:55.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.670 EAL: Restoring previous memory policy: 4 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.670 EAL: Heap on socket 0 was expanded by 34MB 00:07:55.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.670 EAL: request: mp_malloc_sync 00:07:55.670 EAL: No shared files mode enabled, IPC is disabled 00:07:55.671 EAL: Heap on socket 0 was shrunk by 34MB 00:07:55.671 EAL: Trying to obtain current memory policy. 
00:07:55.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.671 EAL: Restoring previous memory policy: 4 00:07:55.671 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.671 EAL: request: mp_malloc_sync 00:07:55.671 EAL: No shared files mode enabled, IPC is disabled 00:07:55.671 EAL: Heap on socket 0 was expanded by 66MB 00:07:55.931 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.931 EAL: request: mp_malloc_sync 00:07:55.931 EAL: No shared files mode enabled, IPC is disabled 00:07:55.931 EAL: Heap on socket 0 was shrunk by 66MB 00:07:55.931 EAL: Trying to obtain current memory policy. 00:07:55.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:55.931 EAL: Restoring previous memory policy: 4 00:07:55.931 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.931 EAL: request: mp_malloc_sync 00:07:55.931 EAL: No shared files mode enabled, IPC is disabled 00:07:55.931 EAL: Heap on socket 0 was expanded by 130MB 00:07:56.189 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.189 EAL: request: mp_malloc_sync 00:07:56.189 EAL: No shared files mode enabled, IPC is disabled 00:07:56.189 EAL: Heap on socket 0 was shrunk by 130MB 00:07:56.447 EAL: Trying to obtain current memory policy. 00:07:56.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:56.447 EAL: Restoring previous memory policy: 4 00:07:56.447 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.447 EAL: request: mp_malloc_sync 00:07:56.447 EAL: No shared files mode enabled, IPC is disabled 00:07:56.447 EAL: Heap on socket 0 was expanded by 258MB 00:07:57.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.014 EAL: request: mp_malloc_sync 00:07:57.014 EAL: No shared files mode enabled, IPC is disabled 00:07:57.014 EAL: Heap on socket 0 was shrunk by 258MB 00:07:57.273 EAL: Trying to obtain current memory policy. 00:07:57.273 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.532 EAL: Restoring previous memory policy: 4 00:07:57.532 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.532 EAL: request: mp_malloc_sync 00:07:57.532 EAL: No shared files mode enabled, IPC is disabled 00:07:57.532 EAL: Heap on socket 0 was expanded by 514MB 00:07:58.466 EAL: Calling mem event callback 'spdk:(nil)' 00:07:58.466 EAL: request: mp_malloc_sync 00:07:58.466 EAL: No shared files mode enabled, IPC is disabled 00:07:58.466 EAL: Heap on socket 0 was shrunk by 514MB 00:07:59.032 EAL: Trying to obtain current memory policy. 
00:07:59.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:59.291 EAL: Restoring previous memory policy: 4 00:07:59.291 EAL: Calling mem event callback 'spdk:(nil)' 00:07:59.291 EAL: request: mp_malloc_sync 00:07:59.291 EAL: No shared files mode enabled, IPC is disabled 00:07:59.291 EAL: Heap on socket 0 was expanded by 1026MB 00:08:01.247 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.247 EAL: request: mp_malloc_sync 00:08:01.247 EAL: No shared files mode enabled, IPC is disabled 00:08:01.247 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:02.622 passed 00:08:02.622 00:08:02.622 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.622 suites 1 1 n/a 0 0 00:08:02.622 tests 2 2 2 0 0 00:08:02.622 asserts 5334 5334 5334 0 n/a 00:08:02.622 00:08:02.622 Elapsed time = 7.549 seconds 00:08:02.622 EAL: Calling mem event callback 'spdk:(nil)' 00:08:02.622 EAL: request: mp_malloc_sync 00:08:02.622 EAL: No shared files mode enabled, IPC is disabled 00:08:02.622 EAL: Heap on socket 0 was shrunk by 2MB 00:08:02.622 EAL: No shared files mode enabled, IPC is disabled 00:08:02.622 EAL: No shared files mode enabled, IPC is disabled 00:08:02.622 EAL: No shared files mode enabled, IPC is disabled 00:08:02.622 00:08:02.622 real 0m7.875s 00:08:02.622 user 0m6.708s 00:08:02.622 sys 0m0.998s 00:08:02.622 16:58:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.622 16:58:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:02.622 ************************************ 00:08:02.622 END TEST env_vtophys 00:08:02.622 ************************************ 00:08:02.622 16:58:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:02.622 16:58:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.622 16:58:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.622 16:58:55 env -- common/autotest_common.sh@10 -- # set +x 00:08:02.622 ************************************ 00:08:02.622 START TEST env_pci 00:08:02.622 ************************************ 00:08:02.622 16:58:55 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:02.622 00:08:02.622 00:08:02.622 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.622 http://cunit.sourceforge.net/ 00:08:02.622 00:08:02.622 00:08:02.622 Suite: pci 00:08:02.881 Test: pci_hook ...[2024-07-25 16:58:55.092103] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62193 has claimed it 00:08:02.881 passed 00:08:02.881 00:08:02.881 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.881 suites 1 1 n/a 0 0 00:08:02.881 tests 1 1 1 0 0 00:08:02.881 asserts 25 25 25 0 n/a 00:08:02.881 00:08:02.881 Elapsed time = 0.007 seconds 00:08:02.881 EAL: Cannot find device (10000:00:01.0) 00:08:02.881 EAL: Failed to attach device on primary process 00:08:02.881 00:08:02.881 real 0m0.081s 00:08:02.881 user 0m0.039s 00:08:02.881 sys 0m0.041s 00:08:02.881 16:58:55 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.881 ************************************ 00:08:02.881 END TEST env_pci 00:08:02.881 ************************************ 00:08:02.881 16:58:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:02.881 16:58:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:02.881 16:58:55 env -- env/env.sh@15 -- # uname 00:08:02.881 16:58:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:02.881 16:58:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:02.881 16:58:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:02.881 16:58:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:02.881 16:58:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.881 16:58:55 env -- common/autotest_common.sh@10 -- # set +x 00:08:02.881 ************************************ 00:08:02.881 START TEST env_dpdk_post_init 00:08:02.881 ************************************ 00:08:02.881 16:58:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:02.881 EAL: Detected CPU lcores: 10 00:08:02.881 EAL: Detected NUMA nodes: 1 00:08:02.881 EAL: Detected shared linkage of DPDK 00:08:02.881 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:02.881 EAL: Selected IOVA mode 'PA' 00:08:03.139 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:03.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:03.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:03.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:03.139 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:03.139 Starting DPDK initialization... 00:08:03.139 Starting SPDK post initialization... 00:08:03.139 SPDK NVMe probe 00:08:03.139 Attaching to 0000:00:10.0 00:08:03.139 Attaching to 0000:00:11.0 00:08:03.139 Attaching to 0000:00:12.0 00:08:03.139 Attaching to 0000:00:13.0 00:08:03.139 Attached to 0000:00:10.0 00:08:03.139 Attached to 0000:00:11.0 00:08:03.139 Attached to 0000:00:13.0 00:08:03.139 Attached to 0000:00:12.0 00:08:03.139 Cleaning up... 
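
(Illustrative sketch, not part of the captured output.) The post-initialization test above is driven with an explicit core mask and base virtual address, as the run_test invocation earlier in the log shows. Invoking the already-built binary by hand under the same paths would look roughly like this:

    # Run the DPDK post-init test on core 0 with a fixed base VA
    SPDK_DIR=/home/vagrant/spdk_repo/spdk          # path taken from the log
    "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000
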
00:08:03.139 00:08:03.139 real 0m0.305s 00:08:03.139 user 0m0.101s 00:08:03.139 sys 0m0.104s 00:08:03.139 16:58:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.139 16:58:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:03.139 ************************************ 00:08:03.139 END TEST env_dpdk_post_init 00:08:03.139 ************************************ 00:08:03.139 16:58:55 env -- env/env.sh@26 -- # uname 00:08:03.139 16:58:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:03.139 16:58:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:03.139 16:58:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.139 16:58:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.139 16:58:55 env -- common/autotest_common.sh@10 -- # set +x 00:08:03.139 ************************************ 00:08:03.139 START TEST env_mem_callbacks 00:08:03.139 ************************************ 00:08:03.139 16:58:55 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:03.139 EAL: Detected CPU lcores: 10 00:08:03.139 EAL: Detected NUMA nodes: 1 00:08:03.139 EAL: Detected shared linkage of DPDK 00:08:03.398 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:03.398 EAL: Selected IOVA mode 'PA' 00:08:03.398 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:03.398 00:08:03.398 00:08:03.398 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.398 http://cunit.sourceforge.net/ 00:08:03.398 00:08:03.398 00:08:03.398 Suite: memory 00:08:03.398 Test: test ... 00:08:03.398 register 0x200000200000 2097152 00:08:03.398 malloc 3145728 00:08:03.398 register 0x200000400000 4194304 00:08:03.398 buf 0x2000004fffc0 len 3145728 PASSED 00:08:03.398 malloc 64 00:08:03.398 buf 0x2000004ffec0 len 64 PASSED 00:08:03.398 malloc 4194304 00:08:03.398 register 0x200000800000 6291456 00:08:03.398 buf 0x2000009fffc0 len 4194304 PASSED 00:08:03.398 free 0x2000004fffc0 3145728 00:08:03.398 free 0x2000004ffec0 64 00:08:03.398 unregister 0x200000400000 4194304 PASSED 00:08:03.398 free 0x2000009fffc0 4194304 00:08:03.398 unregister 0x200000800000 6291456 PASSED 00:08:03.398 malloc 8388608 00:08:03.398 register 0x200000400000 10485760 00:08:03.398 buf 0x2000005fffc0 len 8388608 PASSED 00:08:03.398 free 0x2000005fffc0 8388608 00:08:03.398 unregister 0x200000400000 10485760 PASSED 00:08:03.398 passed 00:08:03.398 00:08:03.398 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.398 suites 1 1 n/a 0 0 00:08:03.398 tests 1 1 1 0 0 00:08:03.398 asserts 15 15 15 0 n/a 00:08:03.398 00:08:03.398 Elapsed time = 0.090 seconds 00:08:03.398 00:08:03.398 real 0m0.301s 00:08:03.398 user 0m0.122s 00:08:03.398 sys 0m0.077s 00:08:03.398 16:58:55 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.398 16:58:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:03.398 ************************************ 00:08:03.398 END TEST env_mem_callbacks 00:08:03.398 ************************************ 00:08:03.656 00:08:03.656 real 0m9.351s 00:08:03.656 user 0m7.499s 00:08:03.656 sys 0m1.462s 00:08:03.656 16:58:55 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.656 16:58:55 env -- common/autotest_common.sh@10 -- # set +x 00:08:03.656 ************************************ 00:08:03.656 END TEST env 00:08:03.656 
************************************ 00:08:03.656 16:58:55 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:03.656 16:58:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.656 16:58:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.656 16:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:03.656 ************************************ 00:08:03.656 START TEST rpc 00:08:03.656 ************************************ 00:08:03.656 16:58:55 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:03.656 * Looking for test storage... 00:08:03.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:03.656 16:58:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62306 00:08:03.656 16:58:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.656 16:58:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62306 00:08:03.656 16:58:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 62306 ']' 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.656 16:58:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.914 [2024-07-25 16:58:56.158598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:03.914 [2024-07-25 16:58:56.158788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62306 ] 00:08:03.914 [2024-07-25 16:58:56.334357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.173 [2024-07-25 16:58:56.609379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:04.173 [2024-07-25 16:58:56.609461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62306' to capture a snapshot of events at runtime. 00:08:04.173 [2024-07-25 16:58:56.609481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.173 [2024-07-25 16:58:56.609494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.173 [2024-07-25 16:58:56.609509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62306 for offline analysis/debug. 
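
(Illustrative sketch, not captured output.) The rpc tests below drive this spdk_tgt instance over its UNIX-domain RPC socket. Using the paths shown in the log (spdk_tgt under build/bin, default socket /var/tmp/spdk.sock), a by-hand session could start like this; the test harness itself uses the waitforlisten helper for the readiness check:

    # Start the target with the bdev tracepoint group enabled, as rpc.sh does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # Wait until the RPC socket appears (waitforlisten does a more thorough check)
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Issue a simple RPC to confirm the target is responsive
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null && echo "target is up"
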
00:08:04.173 [2024-07-25 16:58:56.609554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.107 16:58:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.107 16:58:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:05.107 16:58:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:05.107 16:58:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:05.107 16:58:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:05.107 16:58:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:05.107 16:58:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.107 16:58:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.107 16:58:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.107 ************************************ 00:08:05.107 START TEST rpc_integrity 00:08:05.107 ************************************ 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.107 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.107 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:05.107 { 00:08:05.107 "name": "Malloc0", 00:08:05.107 "aliases": [ 00:08:05.107 "351c970f-90f8-48d5-bf6c-e3de99031c7f" 00:08:05.107 ], 00:08:05.107 "product_name": "Malloc disk", 00:08:05.107 "block_size": 512, 00:08:05.107 "num_blocks": 16384, 00:08:05.107 "uuid": "351c970f-90f8-48d5-bf6c-e3de99031c7f", 00:08:05.107 "assigned_rate_limits": { 00:08:05.107 "rw_ios_per_sec": 0, 00:08:05.107 "rw_mbytes_per_sec": 0, 00:08:05.107 "r_mbytes_per_sec": 0, 00:08:05.107 "w_mbytes_per_sec": 0 00:08:05.107 }, 00:08:05.107 "claimed": false, 00:08:05.107 "zoned": false, 00:08:05.107 "supported_io_types": { 00:08:05.107 "read": true, 00:08:05.107 "write": true, 00:08:05.108 "unmap": true, 00:08:05.108 "flush": true, 
00:08:05.108 "reset": true, 00:08:05.108 "nvme_admin": false, 00:08:05.108 "nvme_io": false, 00:08:05.108 "nvme_io_md": false, 00:08:05.108 "write_zeroes": true, 00:08:05.108 "zcopy": true, 00:08:05.108 "get_zone_info": false, 00:08:05.108 "zone_management": false, 00:08:05.108 "zone_append": false, 00:08:05.108 "compare": false, 00:08:05.108 "compare_and_write": false, 00:08:05.108 "abort": true, 00:08:05.108 "seek_hole": false, 00:08:05.108 "seek_data": false, 00:08:05.108 "copy": true, 00:08:05.108 "nvme_iov_md": false 00:08:05.108 }, 00:08:05.108 "memory_domains": [ 00:08:05.108 { 00:08:05.108 "dma_device_id": "system", 00:08:05.108 "dma_device_type": 1 00:08:05.108 }, 00:08:05.108 { 00:08:05.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.108 "dma_device_type": 2 00:08:05.108 } 00:08:05.108 ], 00:08:05.108 "driver_specific": {} 00:08:05.108 } 00:08:05.108 ]' 00:08:05.108 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:05.365 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:05.365 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.365 [2024-07-25 16:58:57.604006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:05.365 [2024-07-25 16:58:57.604105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.365 [2024-07-25 16:58:57.604154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.365 [2024-07-25 16:58:57.604171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.365 [2024-07-25 16:58:57.607099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.365 [2024-07-25 16:58:57.607148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:05.365 Passthru0 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.365 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.365 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.365 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:05.365 { 00:08:05.365 "name": "Malloc0", 00:08:05.365 "aliases": [ 00:08:05.365 "351c970f-90f8-48d5-bf6c-e3de99031c7f" 00:08:05.365 ], 00:08:05.365 "product_name": "Malloc disk", 00:08:05.365 "block_size": 512, 00:08:05.365 "num_blocks": 16384, 00:08:05.365 "uuid": "351c970f-90f8-48d5-bf6c-e3de99031c7f", 00:08:05.365 "assigned_rate_limits": { 00:08:05.365 "rw_ios_per_sec": 0, 00:08:05.365 "rw_mbytes_per_sec": 0, 00:08:05.365 "r_mbytes_per_sec": 0, 00:08:05.365 "w_mbytes_per_sec": 0 00:08:05.365 }, 00:08:05.365 "claimed": true, 00:08:05.365 "claim_type": "exclusive_write", 00:08:05.365 "zoned": false, 00:08:05.365 "supported_io_types": { 00:08:05.365 "read": true, 00:08:05.365 "write": true, 00:08:05.365 "unmap": true, 00:08:05.365 "flush": true, 00:08:05.365 "reset": true, 00:08:05.365 "nvme_admin": false, 00:08:05.365 "nvme_io": false, 00:08:05.365 "nvme_io_md": false, 00:08:05.365 "write_zeroes": true, 00:08:05.365 "zcopy": true, 
00:08:05.365 "get_zone_info": false, 00:08:05.365 "zone_management": false, 00:08:05.365 "zone_append": false, 00:08:05.365 "compare": false, 00:08:05.365 "compare_and_write": false, 00:08:05.365 "abort": true, 00:08:05.365 "seek_hole": false, 00:08:05.365 "seek_data": false, 00:08:05.366 "copy": true, 00:08:05.366 "nvme_iov_md": false 00:08:05.366 }, 00:08:05.366 "memory_domains": [ 00:08:05.366 { 00:08:05.366 "dma_device_id": "system", 00:08:05.366 "dma_device_type": 1 00:08:05.366 }, 00:08:05.366 { 00:08:05.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.366 "dma_device_type": 2 00:08:05.366 } 00:08:05.366 ], 00:08:05.366 "driver_specific": {} 00:08:05.366 }, 00:08:05.366 { 00:08:05.366 "name": "Passthru0", 00:08:05.366 "aliases": [ 00:08:05.366 "ed17ffdf-d0ac-5b6e-bca7-0e175af3c086" 00:08:05.366 ], 00:08:05.366 "product_name": "passthru", 00:08:05.366 "block_size": 512, 00:08:05.366 "num_blocks": 16384, 00:08:05.366 "uuid": "ed17ffdf-d0ac-5b6e-bca7-0e175af3c086", 00:08:05.366 "assigned_rate_limits": { 00:08:05.366 "rw_ios_per_sec": 0, 00:08:05.366 "rw_mbytes_per_sec": 0, 00:08:05.366 "r_mbytes_per_sec": 0, 00:08:05.366 "w_mbytes_per_sec": 0 00:08:05.366 }, 00:08:05.366 "claimed": false, 00:08:05.366 "zoned": false, 00:08:05.366 "supported_io_types": { 00:08:05.366 "read": true, 00:08:05.366 "write": true, 00:08:05.366 "unmap": true, 00:08:05.366 "flush": true, 00:08:05.366 "reset": true, 00:08:05.366 "nvme_admin": false, 00:08:05.366 "nvme_io": false, 00:08:05.366 "nvme_io_md": false, 00:08:05.366 "write_zeroes": true, 00:08:05.366 "zcopy": true, 00:08:05.366 "get_zone_info": false, 00:08:05.366 "zone_management": false, 00:08:05.366 "zone_append": false, 00:08:05.366 "compare": false, 00:08:05.366 "compare_and_write": false, 00:08:05.366 "abort": true, 00:08:05.366 "seek_hole": false, 00:08:05.366 "seek_data": false, 00:08:05.366 "copy": true, 00:08:05.366 "nvme_iov_md": false 00:08:05.366 }, 00:08:05.366 "memory_domains": [ 00:08:05.366 { 00:08:05.366 "dma_device_id": "system", 00:08:05.366 "dma_device_type": 1 00:08:05.366 }, 00:08:05.366 { 00:08:05.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.366 "dma_device_type": 2 00:08:05.366 } 00:08:05.366 ], 00:08:05.366 "driver_specific": { 00:08:05.366 "passthru": { 00:08:05.366 "name": "Passthru0", 00:08:05.366 "base_bdev_name": "Malloc0" 00:08:05.366 } 00:08:05.366 } 00:08:05.366 } 00:08:05.366 ]' 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:05.366 16:58:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:05.366 00:08:05.366 real 0m0.356s 00:08:05.366 user 0m0.206s 00:08:05.366 sys 0m0.055s 00:08:05.366 ************************************ 00:08:05.366 END TEST rpc_integrity 00:08:05.366 ************************************ 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.366 16:58:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.366 16:58:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:05.366 16:58:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.366 16:58:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.366 16:58:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 ************************************ 00:08:05.683 START TEST rpc_plugins 00:08:05.683 ************************************ 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:05.683 { 00:08:05.683 "name": "Malloc1", 00:08:05.683 "aliases": [ 00:08:05.683 "063d532f-a754-4fc8-a939-e94703813e12" 00:08:05.683 ], 00:08:05.683 "product_name": "Malloc disk", 00:08:05.683 "block_size": 4096, 00:08:05.683 "num_blocks": 256, 00:08:05.683 "uuid": "063d532f-a754-4fc8-a939-e94703813e12", 00:08:05.683 "assigned_rate_limits": { 00:08:05.683 "rw_ios_per_sec": 0, 00:08:05.683 "rw_mbytes_per_sec": 0, 00:08:05.683 "r_mbytes_per_sec": 0, 00:08:05.683 "w_mbytes_per_sec": 0 00:08:05.683 }, 00:08:05.683 "claimed": false, 00:08:05.683 "zoned": false, 00:08:05.683 "supported_io_types": { 00:08:05.683 "read": true, 00:08:05.683 "write": true, 00:08:05.683 "unmap": true, 00:08:05.683 "flush": true, 00:08:05.683 "reset": true, 00:08:05.683 "nvme_admin": false, 00:08:05.683 "nvme_io": false, 00:08:05.683 "nvme_io_md": false, 00:08:05.683 "write_zeroes": true, 00:08:05.683 "zcopy": true, 00:08:05.683 "get_zone_info": false, 00:08:05.683 "zone_management": false, 00:08:05.683 "zone_append": false, 00:08:05.683 "compare": false, 00:08:05.683 "compare_and_write": false, 00:08:05.683 "abort": true, 00:08:05.683 "seek_hole": false, 00:08:05.683 "seek_data": false, 00:08:05.683 "copy": true, 00:08:05.683 "nvme_iov_md": false 00:08:05.683 }, 00:08:05.683 "memory_domains": [ 00:08:05.683 { 00:08:05.683 "dma_device_id": "system", 00:08:05.683 "dma_device_type": 1 00:08:05.683 }, 00:08:05.683 { 00:08:05.683 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:05.683 "dma_device_type": 2 00:08:05.683 } 00:08:05.683 ], 00:08:05.683 "driver_specific": {} 00:08:05.683 } 00:08:05.683 ]' 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:05.683 16:58:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:05.683 16:58:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:05.683 00:08:05.683 real 0m0.170s 00:08:05.683 user 0m0.107s 00:08:05.683 sys 0m0.022s 00:08:05.683 ************************************ 00:08:05.683 END TEST rpc_plugins 00:08:05.683 ************************************ 00:08:05.683 16:58:58 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.683 16:58:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:05.683 16:58:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.683 16:58:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.683 16:58:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 ************************************ 00:08:05.683 START TEST rpc_trace_cmd_test 00:08:05.683 ************************************ 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.683 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:05.683 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62306", 00:08:05.683 "tpoint_group_mask": "0x8", 00:08:05.683 "iscsi_conn": { 00:08:05.683 "mask": "0x2", 00:08:05.683 "tpoint_mask": "0x0" 00:08:05.683 }, 00:08:05.683 "scsi": { 00:08:05.683 "mask": "0x4", 00:08:05.683 "tpoint_mask": "0x0" 00:08:05.683 }, 00:08:05.683 "bdev": { 00:08:05.683 "mask": "0x8", 00:08:05.683 "tpoint_mask": "0xffffffffffffffff" 00:08:05.683 }, 00:08:05.683 "nvmf_rdma": { 00:08:05.683 "mask": "0x10", 00:08:05.683 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "nvmf_tcp": { 00:08:05.684 "mask": "0x20", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "ftl": { 00:08:05.684 "mask": "0x40", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "blobfs": { 00:08:05.684 "mask": "0x80", 00:08:05.684 
"tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "dsa": { 00:08:05.684 "mask": "0x200", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "thread": { 00:08:05.684 "mask": "0x400", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "nvme_pcie": { 00:08:05.684 "mask": "0x800", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "iaa": { 00:08:05.684 "mask": "0x1000", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "nvme_tcp": { 00:08:05.684 "mask": "0x2000", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "bdev_nvme": { 00:08:05.684 "mask": "0x4000", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 }, 00:08:05.684 "sock": { 00:08:05.684 "mask": "0x8000", 00:08:05.684 "tpoint_mask": "0x0" 00:08:05.684 } 00:08:05.684 }' 00:08:05.684 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:05.684 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:05.684 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:05.942 00:08:05.942 real 0m0.267s 00:08:05.942 user 0m0.236s 00:08:05.942 sys 0m0.022s 00:08:05.942 ************************************ 00:08:05.942 END TEST rpc_trace_cmd_test 00:08:05.942 ************************************ 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.942 16:58:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.942 16:58:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:05.942 16:58:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:05.942 16:58:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:05.942 16:58:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.942 16:58:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.942 16:58:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.942 ************************************ 00:08:05.942 START TEST rpc_daemon_integrity 00:08:05.942 ************************************ 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:05.942 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:06.200 16:58:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:06.200 { 00:08:06.200 "name": "Malloc2", 00:08:06.200 "aliases": [ 00:08:06.200 "8171dc9d-2b87-44cc-a960-578cecd2f670" 00:08:06.200 ], 00:08:06.200 "product_name": "Malloc disk", 00:08:06.200 "block_size": 512, 00:08:06.200 "num_blocks": 16384, 00:08:06.200 "uuid": "8171dc9d-2b87-44cc-a960-578cecd2f670", 00:08:06.200 "assigned_rate_limits": { 00:08:06.200 "rw_ios_per_sec": 0, 00:08:06.200 "rw_mbytes_per_sec": 0, 00:08:06.200 "r_mbytes_per_sec": 0, 00:08:06.200 "w_mbytes_per_sec": 0 00:08:06.200 }, 00:08:06.200 "claimed": false, 00:08:06.200 "zoned": false, 00:08:06.200 "supported_io_types": { 00:08:06.200 "read": true, 00:08:06.200 "write": true, 00:08:06.200 "unmap": true, 00:08:06.200 "flush": true, 00:08:06.200 "reset": true, 00:08:06.200 "nvme_admin": false, 00:08:06.200 "nvme_io": false, 00:08:06.200 "nvme_io_md": false, 00:08:06.200 "write_zeroes": true, 00:08:06.200 "zcopy": true, 00:08:06.200 "get_zone_info": false, 00:08:06.200 "zone_management": false, 00:08:06.200 "zone_append": false, 00:08:06.200 "compare": false, 00:08:06.200 "compare_and_write": false, 00:08:06.200 "abort": true, 00:08:06.200 "seek_hole": false, 00:08:06.200 "seek_data": false, 00:08:06.200 "copy": true, 00:08:06.200 "nvme_iov_md": false 00:08:06.200 }, 00:08:06.200 "memory_domains": [ 00:08:06.200 { 00:08:06.200 "dma_device_id": "system", 00:08:06.200 "dma_device_type": 1 00:08:06.200 }, 00:08:06.200 { 00:08:06.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.200 "dma_device_type": 2 00:08:06.200 } 00:08:06.200 ], 00:08:06.200 "driver_specific": {} 00:08:06.200 } 00:08:06.200 ]' 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.200 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.200 [2024-07-25 16:58:58.550495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:06.200 [2024-07-25 16:58:58.550580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.200 [2024-07-25 16:58:58.550620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:06.200 [2024-07-25 16:58:58.550636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.201 [2024-07-25 16:58:58.553730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.201 [2024-07-25 16:58:58.553778] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:06.201 Passthru0 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:06.201 { 00:08:06.201 "name": "Malloc2", 00:08:06.201 "aliases": [ 00:08:06.201 "8171dc9d-2b87-44cc-a960-578cecd2f670" 00:08:06.201 ], 00:08:06.201 "product_name": "Malloc disk", 00:08:06.201 "block_size": 512, 00:08:06.201 "num_blocks": 16384, 00:08:06.201 "uuid": "8171dc9d-2b87-44cc-a960-578cecd2f670", 00:08:06.201 "assigned_rate_limits": { 00:08:06.201 "rw_ios_per_sec": 0, 00:08:06.201 "rw_mbytes_per_sec": 0, 00:08:06.201 "r_mbytes_per_sec": 0, 00:08:06.201 "w_mbytes_per_sec": 0 00:08:06.201 }, 00:08:06.201 "claimed": true, 00:08:06.201 "claim_type": "exclusive_write", 00:08:06.201 "zoned": false, 00:08:06.201 "supported_io_types": { 00:08:06.201 "read": true, 00:08:06.201 "write": true, 00:08:06.201 "unmap": true, 00:08:06.201 "flush": true, 00:08:06.201 "reset": true, 00:08:06.201 "nvme_admin": false, 00:08:06.201 "nvme_io": false, 00:08:06.201 "nvme_io_md": false, 00:08:06.201 "write_zeroes": true, 00:08:06.201 "zcopy": true, 00:08:06.201 "get_zone_info": false, 00:08:06.201 "zone_management": false, 00:08:06.201 "zone_append": false, 00:08:06.201 "compare": false, 00:08:06.201 "compare_and_write": false, 00:08:06.201 "abort": true, 00:08:06.201 "seek_hole": false, 00:08:06.201 "seek_data": false, 00:08:06.201 "copy": true, 00:08:06.201 "nvme_iov_md": false 00:08:06.201 }, 00:08:06.201 "memory_domains": [ 00:08:06.201 { 00:08:06.201 "dma_device_id": "system", 00:08:06.201 "dma_device_type": 1 00:08:06.201 }, 00:08:06.201 { 00:08:06.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.201 "dma_device_type": 2 00:08:06.201 } 00:08:06.201 ], 00:08:06.201 "driver_specific": {} 00:08:06.201 }, 00:08:06.201 { 00:08:06.201 "name": "Passthru0", 00:08:06.201 "aliases": [ 00:08:06.201 "542a7147-aa81-5eb0-bf20-53ecc5424806" 00:08:06.201 ], 00:08:06.201 "product_name": "passthru", 00:08:06.201 "block_size": 512, 00:08:06.201 "num_blocks": 16384, 00:08:06.201 "uuid": "542a7147-aa81-5eb0-bf20-53ecc5424806", 00:08:06.201 "assigned_rate_limits": { 00:08:06.201 "rw_ios_per_sec": 0, 00:08:06.201 "rw_mbytes_per_sec": 0, 00:08:06.201 "r_mbytes_per_sec": 0, 00:08:06.201 "w_mbytes_per_sec": 0 00:08:06.201 }, 00:08:06.201 "claimed": false, 00:08:06.201 "zoned": false, 00:08:06.201 "supported_io_types": { 00:08:06.201 "read": true, 00:08:06.201 "write": true, 00:08:06.201 "unmap": true, 00:08:06.201 "flush": true, 00:08:06.201 "reset": true, 00:08:06.201 "nvme_admin": false, 00:08:06.201 "nvme_io": false, 00:08:06.201 "nvme_io_md": false, 00:08:06.201 "write_zeroes": true, 00:08:06.201 "zcopy": true, 00:08:06.201 "get_zone_info": false, 00:08:06.201 "zone_management": false, 00:08:06.201 "zone_append": false, 00:08:06.201 "compare": false, 00:08:06.201 "compare_and_write": false, 00:08:06.201 "abort": true, 00:08:06.201 "seek_hole": false, 00:08:06.201 "seek_data": false, 00:08:06.201 "copy": true, 00:08:06.201 "nvme_iov_md": false 00:08:06.201 }, 00:08:06.201 
"memory_domains": [ 00:08:06.201 { 00:08:06.201 "dma_device_id": "system", 00:08:06.201 "dma_device_type": 1 00:08:06.201 }, 00:08:06.201 { 00:08:06.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.201 "dma_device_type": 2 00:08:06.201 } 00:08:06.201 ], 00:08:06.201 "driver_specific": { 00:08:06.201 "passthru": { 00:08:06.201 "name": "Passthru0", 00:08:06.201 "base_bdev_name": "Malloc2" 00:08:06.201 } 00:08:06.201 } 00:08:06.201 } 00:08:06.201 ]' 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.201 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:06.459 00:08:06.459 real 0m0.356s 00:08:06.459 user 0m0.215s 00:08:06.459 sys 0m0.053s 00:08:06.459 ************************************ 00:08:06.459 END TEST rpc_daemon_integrity 00:08:06.459 ************************************ 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.459 16:58:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:06.459 16:58:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:06.459 16:58:58 rpc -- rpc/rpc.sh@84 -- # killprocess 62306 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@950 -- # '[' -z 62306 ']' 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@954 -- # kill -0 62306 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@955 -- # uname 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62306 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.459 killing process with pid 62306 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62306' 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@969 -- # kill 62306 00:08:06.459 16:58:58 rpc -- common/autotest_common.sh@974 -- # wait 62306 00:08:08.990 00:08:08.990 real 0m5.226s 00:08:08.990 user 0m5.852s 
00:08:08.990 sys 0m0.891s 00:08:08.990 16:59:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.990 ************************************ 00:08:08.990 END TEST rpc 00:08:08.990 ************************************ 00:08:08.990 16:59:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.990 16:59:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:08.990 16:59:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.990 16:59:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.990 16:59:01 -- common/autotest_common.sh@10 -- # set +x 00:08:08.990 ************************************ 00:08:08.990 START TEST skip_rpc 00:08:08.990 ************************************ 00:08:08.990 16:59:01 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:08.990 * Looking for test storage... 00:08:08.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:08.990 16:59:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:08.990 16:59:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:08.990 16:59:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:08.990 16:59:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.990 16:59:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.990 16:59:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.990 ************************************ 00:08:08.990 START TEST skip_rpc 00:08:08.990 ************************************ 00:08:08.990 16:59:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:08.990 16:59:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62533 00:08:08.990 16:59:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.990 16:59:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:08.990 16:59:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:09.248 [2024-07-25 16:59:01.486632] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
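The rpc_integrity and rpc_daemon_integrity passes earlier in this run exercise the malloc/passthru bdev RPCs and check the reported bdev count with jq. A minimal hand-run sketch of the same pattern, assuming a running spdk_tgt and the in-tree scripts/rpc.py client (bdev names echo the ones in the log; exact paths are illustrative):

# create an 8 MiB malloc bdev with 512-byte blocks, then stack a passthru bdev on it
./scripts/rpc.py bdev_malloc_create 8 512            # prints the new bdev name, e.g. Malloc2
./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length          # the test asserts this is 2
# tear down in reverse order and confirm the list is empty again
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc2
./scripts/rpc.py bdev_get_bdevs | jq length          # expected: 0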
00:08:09.248 [2024-07-25 16:59:01.486843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62533 ] 00:08:09.248 [2024-07-25 16:59:01.664101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.506 [2024-07-25 16:59:01.941716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62533 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62533 ']' 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62533 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62533 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.881 killing process with pid 62533 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62533' 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62533 00:08:14.881 16:59:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62533 00:08:16.286 00:08:16.286 real 0m7.264s 00:08:16.286 user 0m6.605s 00:08:16.286 sys 0m0.546s 00:08:16.286 16:59:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.286 16:59:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.286 ************************************ 00:08:16.286 END TEST skip_rpc 00:08:16.286 
************************************ 00:08:16.286 16:59:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:16.286 16:59:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.286 16:59:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.286 16:59:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.286 ************************************ 00:08:16.286 START TEST skip_rpc_with_json 00:08:16.286 ************************************ 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62637 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62637 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62637 ']' 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.286 16:59:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:16.286 [2024-07-25 16:59:08.745639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
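The skip_rpc case that just finished above is, stripped of the test harness, a start-without-RPC-server plus an expected RPC failure. A rough hand-run equivalent (sketch only; binary and socket paths as used elsewhere in this log):

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
# with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so the call must fail;
# the test wraps it in NOT and treats a non-zero exit as success
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
fi
kill "$tgt_pid"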
00:08:16.286 [2024-07-25 16:59:08.745813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:08:16.544 [2024-07-25 16:59:08.911517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.802 [2024-07-25 16:59:09.154965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.736 [2024-07-25 16:59:09.981830] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:17.736 request: 00:08:17.736 { 00:08:17.736 "trtype": "tcp", 00:08:17.736 "method": "nvmf_get_transports", 00:08:17.736 "req_id": 1 00:08:17.736 } 00:08:17.736 Got JSON-RPC error response 00:08:17.736 response: 00:08:17.736 { 00:08:17.736 "code": -19, 00:08:17.736 "message": "No such device" 00:08:17.736 } 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.736 [2024-07-25 16:59:09.990008] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.736 16:59:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.736 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.736 16:59:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:17.736 { 00:08:17.736 "subsystems": [ 00:08:17.736 { 00:08:17.736 "subsystem": "keyring", 00:08:17.736 "config": [] 00:08:17.736 }, 00:08:17.736 { 00:08:17.736 "subsystem": "iobuf", 00:08:17.736 "config": [ 00:08:17.736 { 00:08:17.736 "method": "iobuf_set_options", 00:08:17.736 "params": { 00:08:17.736 "small_pool_count": 8192, 00:08:17.736 "large_pool_count": 1024, 00:08:17.736 "small_bufsize": 8192, 00:08:17.736 "large_bufsize": 135168 00:08:17.736 } 00:08:17.736 } 00:08:17.736 ] 00:08:17.736 }, 00:08:17.736 { 00:08:17.736 "subsystem": "sock", 00:08:17.736 "config": [ 00:08:17.736 { 00:08:17.736 "method": "sock_set_default_impl", 00:08:17.736 "params": { 00:08:17.736 "impl_name": "posix" 00:08:17.736 } 00:08:17.736 }, 00:08:17.736 { 00:08:17.736 "method": "sock_impl_set_options", 00:08:17.736 "params": { 00:08:17.736 "impl_name": "ssl", 00:08:17.736 "recv_buf_size": 4096, 00:08:17.736 "send_buf_size": 4096, 
00:08:17.736 "enable_recv_pipe": true, 00:08:17.736 "enable_quickack": false, 00:08:17.736 "enable_placement_id": 0, 00:08:17.736 "enable_zerocopy_send_server": true, 00:08:17.736 "enable_zerocopy_send_client": false, 00:08:17.736 "zerocopy_threshold": 0, 00:08:17.736 "tls_version": 0, 00:08:17.736 "enable_ktls": false 00:08:17.736 } 00:08:17.736 }, 00:08:17.736 { 00:08:17.736 "method": "sock_impl_set_options", 00:08:17.736 "params": { 00:08:17.736 "impl_name": "posix", 00:08:17.736 "recv_buf_size": 2097152, 00:08:17.736 "send_buf_size": 2097152, 00:08:17.736 "enable_recv_pipe": true, 00:08:17.736 "enable_quickack": false, 00:08:17.736 "enable_placement_id": 0, 00:08:17.737 "enable_zerocopy_send_server": true, 00:08:17.737 "enable_zerocopy_send_client": false, 00:08:17.737 "zerocopy_threshold": 0, 00:08:17.737 "tls_version": 0, 00:08:17.737 "enable_ktls": false 00:08:17.737 } 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "vmd", 00:08:17.737 "config": [] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "accel", 00:08:17.737 "config": [ 00:08:17.737 { 00:08:17.737 "method": "accel_set_options", 00:08:17.737 "params": { 00:08:17.737 "small_cache_size": 128, 00:08:17.737 "large_cache_size": 16, 00:08:17.737 "task_count": 2048, 00:08:17.737 "sequence_count": 2048, 00:08:17.737 "buf_count": 2048 00:08:17.737 } 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "bdev", 00:08:17.737 "config": [ 00:08:17.737 { 00:08:17.737 "method": "bdev_set_options", 00:08:17.737 "params": { 00:08:17.737 "bdev_io_pool_size": 65535, 00:08:17.737 "bdev_io_cache_size": 256, 00:08:17.737 "bdev_auto_examine": true, 00:08:17.737 "iobuf_small_cache_size": 128, 00:08:17.737 "iobuf_large_cache_size": 16 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "bdev_raid_set_options", 00:08:17.737 "params": { 00:08:17.737 "process_window_size_kb": 1024, 00:08:17.737 "process_max_bandwidth_mb_sec": 0 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "bdev_iscsi_set_options", 00:08:17.737 "params": { 00:08:17.737 "timeout_sec": 30 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "bdev_nvme_set_options", 00:08:17.737 "params": { 00:08:17.737 "action_on_timeout": "none", 00:08:17.737 "timeout_us": 0, 00:08:17.737 "timeout_admin_us": 0, 00:08:17.737 "keep_alive_timeout_ms": 10000, 00:08:17.737 "arbitration_burst": 0, 00:08:17.737 "low_priority_weight": 0, 00:08:17.737 "medium_priority_weight": 0, 00:08:17.737 "high_priority_weight": 0, 00:08:17.737 "nvme_adminq_poll_period_us": 10000, 00:08:17.737 "nvme_ioq_poll_period_us": 0, 00:08:17.737 "io_queue_requests": 0, 00:08:17.737 "delay_cmd_submit": true, 00:08:17.737 "transport_retry_count": 4, 00:08:17.737 "bdev_retry_count": 3, 00:08:17.737 "transport_ack_timeout": 0, 00:08:17.737 "ctrlr_loss_timeout_sec": 0, 00:08:17.737 "reconnect_delay_sec": 0, 00:08:17.737 "fast_io_fail_timeout_sec": 0, 00:08:17.737 "disable_auto_failback": false, 00:08:17.737 "generate_uuids": false, 00:08:17.737 "transport_tos": 0, 00:08:17.737 "nvme_error_stat": false, 00:08:17.737 "rdma_srq_size": 0, 00:08:17.737 "io_path_stat": false, 00:08:17.737 "allow_accel_sequence": false, 00:08:17.737 "rdma_max_cq_size": 0, 00:08:17.737 "rdma_cm_event_timeout_ms": 0, 00:08:17.737 "dhchap_digests": [ 00:08:17.737 "sha256", 00:08:17.737 "sha384", 00:08:17.737 "sha512" 00:08:17.737 ], 00:08:17.737 "dhchap_dhgroups": [ 00:08:17.737 "null", 00:08:17.737 "ffdhe2048", 00:08:17.737 
"ffdhe3072", 00:08:17.737 "ffdhe4096", 00:08:17.737 "ffdhe6144", 00:08:17.737 "ffdhe8192" 00:08:17.737 ] 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "bdev_nvme_set_hotplug", 00:08:17.737 "params": { 00:08:17.737 "period_us": 100000, 00:08:17.737 "enable": false 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "bdev_wait_for_examine" 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "scsi", 00:08:17.737 "config": null 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "scheduler", 00:08:17.737 "config": [ 00:08:17.737 { 00:08:17.737 "method": "framework_set_scheduler", 00:08:17.737 "params": { 00:08:17.737 "name": "static" 00:08:17.737 } 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "vhost_scsi", 00:08:17.737 "config": [] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "vhost_blk", 00:08:17.737 "config": [] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "ublk", 00:08:17.737 "config": [] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "nbd", 00:08:17.737 "config": [] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "nvmf", 00:08:17.737 "config": [ 00:08:17.737 { 00:08:17.737 "method": "nvmf_set_config", 00:08:17.737 "params": { 00:08:17.737 "discovery_filter": "match_any", 00:08:17.737 "admin_cmd_passthru": { 00:08:17.737 "identify_ctrlr": false 00:08:17.737 } 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "nvmf_set_max_subsystems", 00:08:17.737 "params": { 00:08:17.737 "max_subsystems": 1024 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "nvmf_set_crdt", 00:08:17.737 "params": { 00:08:17.737 "crdt1": 0, 00:08:17.737 "crdt2": 0, 00:08:17.737 "crdt3": 0 00:08:17.737 } 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "method": "nvmf_create_transport", 00:08:17.737 "params": { 00:08:17.737 "trtype": "TCP", 00:08:17.737 "max_queue_depth": 128, 00:08:17.737 "max_io_qpairs_per_ctrlr": 127, 00:08:17.737 "in_capsule_data_size": 4096, 00:08:17.737 "max_io_size": 131072, 00:08:17.737 "io_unit_size": 131072, 00:08:17.737 "max_aq_depth": 128, 00:08:17.737 "num_shared_buffers": 511, 00:08:17.737 "buf_cache_size": 4294967295, 00:08:17.737 "dif_insert_or_strip": false, 00:08:17.737 "zcopy": false, 00:08:17.737 "c2h_success": true, 00:08:17.737 "sock_priority": 0, 00:08:17.737 "abort_timeout_sec": 1, 00:08:17.737 "ack_timeout": 0, 00:08:17.737 "data_wr_pool_size": 0 00:08:17.737 } 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 }, 00:08:17.737 { 00:08:17.737 "subsystem": "iscsi", 00:08:17.737 "config": [ 00:08:17.737 { 00:08:17.737 "method": "iscsi_set_options", 00:08:17.737 "params": { 00:08:17.737 "node_base": "iqn.2016-06.io.spdk", 00:08:17.737 "max_sessions": 128, 00:08:17.737 "max_connections_per_session": 2, 00:08:17.737 "max_queue_depth": 64, 00:08:17.737 "default_time2wait": 2, 00:08:17.737 "default_time2retain": 20, 00:08:17.737 "first_burst_length": 8192, 00:08:17.737 "immediate_data": true, 00:08:17.737 "allow_duplicated_isid": false, 00:08:17.737 "error_recovery_level": 0, 00:08:17.737 "nop_timeout": 60, 00:08:17.737 "nop_in_interval": 30, 00:08:17.737 "disable_chap": false, 00:08:17.737 "require_chap": false, 00:08:17.737 "mutual_chap": false, 00:08:17.737 "chap_group": 0, 00:08:17.737 "max_large_datain_per_connection": 64, 00:08:17.737 "max_r2t_per_connection": 4, 00:08:17.737 "pdu_pool_size": 36864, 00:08:17.737 "immediate_data_pool_size": 16384, 00:08:17.737 "data_out_pool_size": 2048 
00:08:17.737 } 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 } 00:08:17.737 ] 00:08:17.737 } 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62637 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62637 ']' 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62637 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62637 00:08:17.737 killing process with pid 62637 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62637' 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62637 00:08:17.737 16:59:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62637 00:08:20.290 16:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62693 00:08:20.290 16:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:20.290 16:59:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:25.618 16:59:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62693 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62693 ']' 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62693 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62693 00:08:25.619 killing process with pid 62693 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62693' 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62693 00:08:25.619 16:59:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62693 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:28.148 ************************************ 00:08:28.148 END TEST skip_rpc_with_json 00:08:28.148 ************************************ 00:08:28.148 00:08:28.148 real 0m11.616s 00:08:28.148 user 0m10.884s 00:08:28.148 sys 0m1.089s 00:08:28.148 16:59:20 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:28.148 16:59:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.148 ************************************ 00:08:28.148 START TEST skip_rpc_with_delay 00:08:28.148 ************************************ 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:28.148 [2024-07-25 16:59:20.435398] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
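The error above is the whole point of skip_rpc_with_delay: spdk_tgt refuses the combination of --no-rpc-server and --wait-for-rpc, since waiting for an RPC that can never arrive would hang startup. A minimal reproduction (sketch, same binary path as in the log):

# expected to exit non-zero and print the 'Cannot use --wait-for-rpc' error seen above
if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc" >&2
else
    echo "spdk_tgt rejected the flag combination as expected"
fi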
00:08:28.148 [2024-07-25 16:59:20.435614] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.148 00:08:28.148 real 0m0.198s 00:08:28.148 user 0m0.110s 00:08:28.148 sys 0m0.086s 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.148 16:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:28.148 ************************************ 00:08:28.148 END TEST skip_rpc_with_delay 00:08:28.148 ************************************ 00:08:28.148 16:59:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:28.148 16:59:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:28.148 16:59:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.148 16:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.148 ************************************ 00:08:28.148 START TEST exit_on_failed_rpc_init 00:08:28.148 ************************************ 00:08:28.148 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:28.148 16:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62826 00:08:28.148 16:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62826 00:08:28.148 16:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:28.148 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62826 ']' 00:08:28.149 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.149 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.149 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.149 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.149 16:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:28.407 [2024-07-25 16:59:20.683546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:28.407 [2024-07-25 16:59:20.683759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:08:28.407 [2024-07-25 16:59:20.857155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.973 [2024-07-25 16:59:21.174937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:29.560 16:59:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.819 [2024-07-25 16:59:22.125917] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:29.819 [2024-07-25 16:59:22.126121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:08:30.077 [2024-07-25 16:59:22.297304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.334 [2024-07-25 16:59:22.647806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.334 [2024-07-25 16:59:22.647997] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
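The 'socket in use' error above is what exit_on_failed_rpc_init provokes: a second target cannot bind the default RPC socket while the first instance still holds it. Sketch of the conflict, with the usual workaround noted as a comment (the -r/--rpc-socket option is an assumption on my part, not taken from this log):

build/bin/spdk_tgt -m 0x1 &                # first instance owns /var/tmp/spdk.sock
sleep 5
build/bin/spdk_tgt -m 0x2                  # second instance: expected to fail with 'in use'
# running two targets side by side would require giving the second its own socket, e.g.
#   build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock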
00:08:30.334 [2024-07-25 16:59:22.648035] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:30.334 [2024-07-25 16:59:22.648058] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62826 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62826 ']' 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62826 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62826 00:08:30.901 killing process with pid 62826 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62826' 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62826 00:08:30.901 16:59:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62826 00:08:33.427 00:08:33.427 real 0m5.125s 00:08:33.427 user 0m5.959s 00:08:33.427 sys 0m0.671s 00:08:33.427 16:59:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.427 ************************************ 00:08:33.427 END TEST exit_on_failed_rpc_init 00:08:33.427 16:59:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:33.427 ************************************ 00:08:33.427 16:59:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:33.427 ************************************ 00:08:33.427 END TEST skip_rpc 00:08:33.427 ************************************ 00:08:33.427 00:08:33.427 real 0m24.501s 00:08:33.427 user 0m23.657s 00:08:33.427 sys 0m2.575s 00:08:33.427 16:59:25 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.427 16:59:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.427 16:59:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:33.427 16:59:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.427 16:59:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.427 16:59:25 -- common/autotest_common.sh@10 -- # set +x 00:08:33.427 
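Before the rpc_client run below, it is worth condensing the skip_rpc_with_json round-trip from earlier: configure the live target, snapshot with save_config, then boot a fresh target from that JSON and check that the config replays. Sketch only (file names mirror the CONFIG_PATH/LOG_PATH used by the test; assumes the same build tree):

./scripts/rpc.py nvmf_create_transport -t tcp           # live configuration step
./scripts/rpc.py save_config > test/rpc/config.json     # snapshot the running config as JSON
# restart from the snapshot; no RPC server needed, the config is replayed at boot
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5
# the TCP transport must have been recreated purely from the JSON file
grep -q 'TCP Transport Init' test/rpc/log.txt && echo "config replayed"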
************************************ 00:08:33.427 START TEST rpc_client 00:08:33.427 ************************************ 00:08:33.427 16:59:25 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:33.427 * Looking for test storage... 00:08:33.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:33.427 16:59:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:33.684 OK 00:08:33.684 16:59:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:33.684 00:08:33.684 real 0m0.154s 00:08:33.684 user 0m0.072s 00:08:33.684 sys 0m0.086s 00:08:33.684 16:59:25 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.684 16:59:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:33.684 ************************************ 00:08:33.684 END TEST rpc_client 00:08:33.684 ************************************ 00:08:33.684 16:59:25 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:33.684 16:59:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.684 16:59:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.684 16:59:25 -- common/autotest_common.sh@10 -- # set +x 00:08:33.684 ************************************ 00:08:33.684 START TEST json_config 00:08:33.684 ************************************ 00:08:33.684 16:59:25 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:33.684 16:59:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.684 16:59:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:33.684 16:59:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.684 16:59:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.684 16:59:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.684 16:59:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.685 16:59:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.685 16:59:26 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.685 16:59:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.685 16:59:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.685 16:59:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.685 16:59:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.685 16:59:26 json_config -- paths/export.sh@5 -- # export PATH 00:08:33.685 16:59:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@47 -- # : 0 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.685 16:59:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:33.685 
WARNING: No tests are enabled so not running JSON configuration tests 00:08:33.685 16:59:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:33.685 00:08:33.685 real 0m0.082s 00:08:33.685 user 0m0.036s 00:08:33.685 sys 0m0.043s 00:08:33.685 16:59:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.685 ************************************ 00:08:33.685 END TEST json_config 00:08:33.685 ************************************ 00:08:33.685 16:59:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:33.685 16:59:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.685 16:59:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.685 16:59:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.685 16:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:33.685 ************************************ 00:08:33.685 START TEST json_config_extra_key 00:08:33.685 ************************************ 00:08:33.685 16:59:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d53ff83c-e09d-46d2-8b9f-dee7617ec69c 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.944 16:59:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.944 16:59:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.944 16:59:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.944 
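The json_config run above exits almost immediately: every JSON-config feature flag is zero, so the script prints the warning and returns success without ever starting a target. A minimal sketch of that gating check, reconstructed only from the condition visible in the log (not the verbatim json_config.sh source):

    : "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"
    # If no suite that consumes a JSON config is enabled, skip the whole test
    # but still exit 0 so the surrounding run_test stage counts as passed.
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
          SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi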
16:59:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.944 16:59:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.944 16:59:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.944 16:59:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:33.944 16:59:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.944 16:59:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:33.944 16:59:26 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:33.944 INFO: launching applications... 00:08:33.944 16:59:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:33.945 Waiting for target to run... 00:08:33.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63036 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63036 /var/tmp/spdk_tgt.sock 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 63036 ']' 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.945 16:59:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:33.945 16:59:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:33.945 [2024-07-25 16:59:26.318593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:33.945 [2024-07-25 16:59:26.318791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63036 ] 00:08:34.511 [2024-07-25 16:59:26.816756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.769 [2024-07-25 16:59:27.069906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.335 00:08:35.335 INFO: shutting down applications... 00:08:35.335 16:59:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.335 16:59:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:35.335 16:59:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:35.335 16:59:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63036 ]] 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63036 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:35.335 16:59:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:35.336 16:59:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:35.336 16:59:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:35.901 16:59:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:35.901 16:59:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:35.902 16:59:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:35.902 16:59:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:36.467 16:59:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:36.467 16:59:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.467 16:59:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:36.467 16:59:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.033 16:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.033 16:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.033 16:59:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:37.033 16:59:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.600 16:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.600 16:59:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.600 16:59:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:37.600 16:59:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.858 16:59:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.858 16:59:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.858 16:59:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 
00:08:37.858 16:59:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63036 00:08:38.426 SPDK target shutdown done 00:08:38.426 Success 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:38.426 16:59:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:38.426 16:59:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:38.426 00:08:38.426 real 0m4.689s 00:08:38.426 user 0m4.393s 00:08:38.426 sys 0m0.641s 00:08:38.426 ************************************ 00:08:38.426 END TEST json_config_extra_key 00:08:38.426 ************************************ 00:08:38.426 16:59:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.426 16:59:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:38.426 16:59:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:38.426 16:59:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.426 16:59:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.426 16:59:30 -- common/autotest_common.sh@10 -- # set +x 00:08:38.426 ************************************ 00:08:38.426 START TEST alias_rpc 00:08:38.426 ************************************ 00:08:38.426 16:59:30 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:38.686 * Looking for test storage... 00:08:38.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:38.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.686 16:59:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:38.686 16:59:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63142 00:08:38.686 16:59:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63142 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 63142 ']' 00:08:38.686 16:59:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.686 16:59:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.686 [2024-07-25 16:59:31.035566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
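Stripped of the xtrace noise, the json_config_extra_key sequence above amounts to: start spdk_tgt with the extra_key JSON config on a private RPC socket, wait for that socket to answer, then send SIGINT and poll until the process is gone. A rough standalone sketch with the binary path, socket path, core mask and timeouts taken from the logged commands; the real logic lives in test/json_config/common.sh and autotest_common.sh, and the waitforlisten probe shown here (polling rpc_get_methods) is an assumption:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    # Wait until the RPC socket accepts requests (the waitforlisten step).
    for _ in $(seq 1 100); do
        "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done

    # Graceful shutdown: SIGINT, then poll kill -0 for up to 30 half-second
    # ticks, exactly the loop that produces the repeated "kill -0 63036" /
    # "sleep 0.5" lines above.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done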
00:08:38.686 [2024-07-25 16:59:31.035738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63142 ] 00:08:38.948 [2024-07-25 16:59:31.201785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.206 [2024-07-25 16:59:31.501209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.141 16:59:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.141 16:59:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:40.141 16:59:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:40.399 16:59:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63142 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 63142 ']' 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 63142 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63142 00:08:40.399 killing process with pid 63142 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63142' 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 63142 00:08:40.399 16:59:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 63142 00:08:42.933 ************************************ 00:08:42.933 END TEST alias_rpc 00:08:42.933 ************************************ 00:08:42.933 00:08:42.933 real 0m4.273s 00:08:42.933 user 0m4.373s 00:08:42.933 sys 0m0.642s 00:08:42.933 16:59:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.933 16:59:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.933 16:59:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:42.933 16:59:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.933 16:59:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.933 16:59:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.933 16:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.933 ************************************ 00:08:42.933 START TEST spdkcli_tcp 00:08:42.933 ************************************ 00:08:42.933 16:59:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.933 * Looking for test storage... 
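The alias_rpc block above is the same target lifecycle in miniature: start spdk_tgt on the default /var/tmp/spdk.sock, replay a JSON configuration with rpc.py load_config -i, then tear it down with killprocess. A hedged sketch follows; the configuration actually piped by alias_rpc.sh is not shown in the log, so an empty subsystem list stands in for it, and -i is assumed to be load_config's include-aliases switch with the JSON read from stdin:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgt_pid=$!
    # (waitforlisten step omitted here; see the json_config_extra_key sketch above)

    # Replay a configuration against the running target (placeholder config).
    "$RPC" load_config -i <<'EOF'
    { "subsystems": [] }
    EOF

    # Tear-down mirrors the killprocess trace above: make sure the PID is alive
    # and is not a sudo wrapper, then kill it and wait for it to exit (the real
    # helper handles the sudo case separately rather than skipping it).
    kill -0 "$tgt_pid"
    [[ $(ps --no-headers -o comm= "$tgt_pid") == sudo ]] || kill "$tgt_pid"
    wait "$tgt_pid" || true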
00:08:42.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:42.933 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:42.933 16:59:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.933 16:59:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.934 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63246 00:08:42.934 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63246 00:08:42.934 16:59:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 63246 ']' 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.934 16:59:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.191 [2024-07-25 16:59:35.411658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:43.191 [2024-07-25 16:59:35.411853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63246 ] 00:08:43.191 [2024-07-25 16:59:35.583053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.448 [2024-07-25 16:59:35.826033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.448 [2024-07-25 16:59:35.826045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.382 16:59:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.382 16:59:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:44.382 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63263 00:08:44.382 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:44.382 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:44.640 [ 00:08:44.640 "bdev_malloc_delete", 00:08:44.640 "bdev_malloc_create", 00:08:44.640 "bdev_null_resize", 00:08:44.640 "bdev_null_delete", 00:08:44.640 "bdev_null_create", 00:08:44.640 "bdev_nvme_cuse_unregister", 00:08:44.640 "bdev_nvme_cuse_register", 00:08:44.640 "bdev_opal_new_user", 00:08:44.640 "bdev_opal_set_lock_state", 00:08:44.640 "bdev_opal_delete", 00:08:44.640 "bdev_opal_get_info", 00:08:44.640 "bdev_opal_create", 00:08:44.640 "bdev_nvme_opal_revert", 00:08:44.641 "bdev_nvme_opal_init", 00:08:44.641 "bdev_nvme_send_cmd", 00:08:44.641 "bdev_nvme_get_path_iostat", 00:08:44.641 "bdev_nvme_get_mdns_discovery_info", 00:08:44.641 "bdev_nvme_stop_mdns_discovery", 00:08:44.641 "bdev_nvme_start_mdns_discovery", 00:08:44.641 "bdev_nvme_set_multipath_policy", 00:08:44.641 "bdev_nvme_set_preferred_path", 00:08:44.641 "bdev_nvme_get_io_paths", 00:08:44.641 "bdev_nvme_remove_error_injection", 00:08:44.641 "bdev_nvme_add_error_injection", 00:08:44.641 "bdev_nvme_get_discovery_info", 00:08:44.641 "bdev_nvme_stop_discovery", 00:08:44.641 "bdev_nvme_start_discovery", 00:08:44.641 "bdev_nvme_get_controller_health_info", 00:08:44.641 "bdev_nvme_disable_controller", 00:08:44.641 "bdev_nvme_enable_controller", 00:08:44.641 "bdev_nvme_reset_controller", 00:08:44.641 "bdev_nvme_get_transport_statistics", 00:08:44.641 "bdev_nvme_apply_firmware", 00:08:44.641 "bdev_nvme_detach_controller", 00:08:44.641 "bdev_nvme_get_controllers", 00:08:44.641 "bdev_nvme_attach_controller", 00:08:44.641 "bdev_nvme_set_hotplug", 00:08:44.641 "bdev_nvme_set_options", 00:08:44.641 "bdev_passthru_delete", 00:08:44.641 "bdev_passthru_create", 00:08:44.641 "bdev_lvol_set_parent_bdev", 00:08:44.641 "bdev_lvol_set_parent", 00:08:44.641 "bdev_lvol_check_shallow_copy", 00:08:44.641 "bdev_lvol_start_shallow_copy", 00:08:44.641 "bdev_lvol_grow_lvstore", 00:08:44.641 "bdev_lvol_get_lvols", 00:08:44.641 "bdev_lvol_get_lvstores", 00:08:44.641 "bdev_lvol_delete", 00:08:44.641 "bdev_lvol_set_read_only", 00:08:44.641 "bdev_lvol_resize", 00:08:44.641 "bdev_lvol_decouple_parent", 00:08:44.641 "bdev_lvol_inflate", 00:08:44.641 "bdev_lvol_rename", 00:08:44.641 "bdev_lvol_clone_bdev", 00:08:44.641 "bdev_lvol_clone", 00:08:44.641 "bdev_lvol_snapshot", 00:08:44.641 "bdev_lvol_create", 00:08:44.641 "bdev_lvol_delete_lvstore", 00:08:44.641 "bdev_lvol_rename_lvstore", 00:08:44.641 "bdev_lvol_create_lvstore", 
00:08:44.641 "bdev_raid_set_options", 00:08:44.641 "bdev_raid_remove_base_bdev", 00:08:44.641 "bdev_raid_add_base_bdev", 00:08:44.641 "bdev_raid_delete", 00:08:44.641 "bdev_raid_create", 00:08:44.641 "bdev_raid_get_bdevs", 00:08:44.641 "bdev_error_inject_error", 00:08:44.641 "bdev_error_delete", 00:08:44.641 "bdev_error_create", 00:08:44.641 "bdev_split_delete", 00:08:44.641 "bdev_split_create", 00:08:44.641 "bdev_delay_delete", 00:08:44.641 "bdev_delay_create", 00:08:44.641 "bdev_delay_update_latency", 00:08:44.641 "bdev_zone_block_delete", 00:08:44.641 "bdev_zone_block_create", 00:08:44.641 "blobfs_create", 00:08:44.641 "blobfs_detect", 00:08:44.641 "blobfs_set_cache_size", 00:08:44.641 "bdev_xnvme_delete", 00:08:44.641 "bdev_xnvme_create", 00:08:44.641 "bdev_aio_delete", 00:08:44.641 "bdev_aio_rescan", 00:08:44.641 "bdev_aio_create", 00:08:44.641 "bdev_ftl_set_property", 00:08:44.641 "bdev_ftl_get_properties", 00:08:44.641 "bdev_ftl_get_stats", 00:08:44.641 "bdev_ftl_unmap", 00:08:44.641 "bdev_ftl_unload", 00:08:44.641 "bdev_ftl_delete", 00:08:44.641 "bdev_ftl_load", 00:08:44.641 "bdev_ftl_create", 00:08:44.641 "bdev_virtio_attach_controller", 00:08:44.641 "bdev_virtio_scsi_get_devices", 00:08:44.641 "bdev_virtio_detach_controller", 00:08:44.641 "bdev_virtio_blk_set_hotplug", 00:08:44.641 "bdev_iscsi_delete", 00:08:44.641 "bdev_iscsi_create", 00:08:44.641 "bdev_iscsi_set_options", 00:08:44.641 "accel_error_inject_error", 00:08:44.641 "ioat_scan_accel_module", 00:08:44.641 "dsa_scan_accel_module", 00:08:44.641 "iaa_scan_accel_module", 00:08:44.641 "keyring_file_remove_key", 00:08:44.641 "keyring_file_add_key", 00:08:44.641 "keyring_linux_set_options", 00:08:44.641 "iscsi_get_histogram", 00:08:44.641 "iscsi_enable_histogram", 00:08:44.641 "iscsi_set_options", 00:08:44.641 "iscsi_get_auth_groups", 00:08:44.641 "iscsi_auth_group_remove_secret", 00:08:44.641 "iscsi_auth_group_add_secret", 00:08:44.641 "iscsi_delete_auth_group", 00:08:44.641 "iscsi_create_auth_group", 00:08:44.641 "iscsi_set_discovery_auth", 00:08:44.641 "iscsi_get_options", 00:08:44.641 "iscsi_target_node_request_logout", 00:08:44.641 "iscsi_target_node_set_redirect", 00:08:44.641 "iscsi_target_node_set_auth", 00:08:44.641 "iscsi_target_node_add_lun", 00:08:44.641 "iscsi_get_stats", 00:08:44.641 "iscsi_get_connections", 00:08:44.641 "iscsi_portal_group_set_auth", 00:08:44.641 "iscsi_start_portal_group", 00:08:44.641 "iscsi_delete_portal_group", 00:08:44.641 "iscsi_create_portal_group", 00:08:44.641 "iscsi_get_portal_groups", 00:08:44.641 "iscsi_delete_target_node", 00:08:44.641 "iscsi_target_node_remove_pg_ig_maps", 00:08:44.641 "iscsi_target_node_add_pg_ig_maps", 00:08:44.641 "iscsi_create_target_node", 00:08:44.641 "iscsi_get_target_nodes", 00:08:44.641 "iscsi_delete_initiator_group", 00:08:44.641 "iscsi_initiator_group_remove_initiators", 00:08:44.641 "iscsi_initiator_group_add_initiators", 00:08:44.641 "iscsi_create_initiator_group", 00:08:44.641 "iscsi_get_initiator_groups", 00:08:44.641 "nvmf_set_crdt", 00:08:44.641 "nvmf_set_config", 00:08:44.641 "nvmf_set_max_subsystems", 00:08:44.641 "nvmf_stop_mdns_prr", 00:08:44.641 "nvmf_publish_mdns_prr", 00:08:44.641 "nvmf_subsystem_get_listeners", 00:08:44.641 "nvmf_subsystem_get_qpairs", 00:08:44.641 "nvmf_subsystem_get_controllers", 00:08:44.641 "nvmf_get_stats", 00:08:44.641 "nvmf_get_transports", 00:08:44.641 "nvmf_create_transport", 00:08:44.641 "nvmf_get_targets", 00:08:44.641 "nvmf_delete_target", 00:08:44.641 "nvmf_create_target", 00:08:44.641 
"nvmf_subsystem_allow_any_host", 00:08:44.641 "nvmf_subsystem_remove_host", 00:08:44.641 "nvmf_subsystem_add_host", 00:08:44.641 "nvmf_ns_remove_host", 00:08:44.641 "nvmf_ns_add_host", 00:08:44.641 "nvmf_subsystem_remove_ns", 00:08:44.641 "nvmf_subsystem_add_ns", 00:08:44.641 "nvmf_subsystem_listener_set_ana_state", 00:08:44.641 "nvmf_discovery_get_referrals", 00:08:44.641 "nvmf_discovery_remove_referral", 00:08:44.641 "nvmf_discovery_add_referral", 00:08:44.641 "nvmf_subsystem_remove_listener", 00:08:44.641 "nvmf_subsystem_add_listener", 00:08:44.641 "nvmf_delete_subsystem", 00:08:44.641 "nvmf_create_subsystem", 00:08:44.641 "nvmf_get_subsystems", 00:08:44.641 "env_dpdk_get_mem_stats", 00:08:44.641 "nbd_get_disks", 00:08:44.641 "nbd_stop_disk", 00:08:44.641 "nbd_start_disk", 00:08:44.641 "ublk_recover_disk", 00:08:44.641 "ublk_get_disks", 00:08:44.641 "ublk_stop_disk", 00:08:44.641 "ublk_start_disk", 00:08:44.641 "ublk_destroy_target", 00:08:44.641 "ublk_create_target", 00:08:44.641 "virtio_blk_create_transport", 00:08:44.641 "virtio_blk_get_transports", 00:08:44.641 "vhost_controller_set_coalescing", 00:08:44.641 "vhost_get_controllers", 00:08:44.641 "vhost_delete_controller", 00:08:44.641 "vhost_create_blk_controller", 00:08:44.641 "vhost_scsi_controller_remove_target", 00:08:44.641 "vhost_scsi_controller_add_target", 00:08:44.641 "vhost_start_scsi_controller", 00:08:44.641 "vhost_create_scsi_controller", 00:08:44.641 "thread_set_cpumask", 00:08:44.641 "framework_get_governor", 00:08:44.641 "framework_get_scheduler", 00:08:44.641 "framework_set_scheduler", 00:08:44.641 "framework_get_reactors", 00:08:44.641 "thread_get_io_channels", 00:08:44.641 "thread_get_pollers", 00:08:44.641 "thread_get_stats", 00:08:44.641 "framework_monitor_context_switch", 00:08:44.641 "spdk_kill_instance", 00:08:44.641 "log_enable_timestamps", 00:08:44.641 "log_get_flags", 00:08:44.641 "log_clear_flag", 00:08:44.641 "log_set_flag", 00:08:44.641 "log_get_level", 00:08:44.641 "log_set_level", 00:08:44.641 "log_get_print_level", 00:08:44.641 "log_set_print_level", 00:08:44.641 "framework_enable_cpumask_locks", 00:08:44.641 "framework_disable_cpumask_locks", 00:08:44.641 "framework_wait_init", 00:08:44.641 "framework_start_init", 00:08:44.641 "scsi_get_devices", 00:08:44.641 "bdev_get_histogram", 00:08:44.641 "bdev_enable_histogram", 00:08:44.641 "bdev_set_qos_limit", 00:08:44.641 "bdev_set_qd_sampling_period", 00:08:44.641 "bdev_get_bdevs", 00:08:44.641 "bdev_reset_iostat", 00:08:44.641 "bdev_get_iostat", 00:08:44.641 "bdev_examine", 00:08:44.641 "bdev_wait_for_examine", 00:08:44.641 "bdev_set_options", 00:08:44.641 "notify_get_notifications", 00:08:44.641 "notify_get_types", 00:08:44.641 "accel_get_stats", 00:08:44.641 "accel_set_options", 00:08:44.641 "accel_set_driver", 00:08:44.641 "accel_crypto_key_destroy", 00:08:44.641 "accel_crypto_keys_get", 00:08:44.641 "accel_crypto_key_create", 00:08:44.641 "accel_assign_opc", 00:08:44.641 "accel_get_module_info", 00:08:44.641 "accel_get_opc_assignments", 00:08:44.641 "vmd_rescan", 00:08:44.641 "vmd_remove_device", 00:08:44.641 "vmd_enable", 00:08:44.641 "sock_get_default_impl", 00:08:44.641 "sock_set_default_impl", 00:08:44.641 "sock_impl_set_options", 00:08:44.641 "sock_impl_get_options", 00:08:44.641 "iobuf_get_stats", 00:08:44.641 "iobuf_set_options", 00:08:44.641 "framework_get_pci_devices", 00:08:44.641 "framework_get_config", 00:08:44.641 "framework_get_subsystems", 00:08:44.641 "trace_get_info", 00:08:44.641 "trace_get_tpoint_group_mask", 00:08:44.641 
"trace_disable_tpoint_group", 00:08:44.641 "trace_enable_tpoint_group", 00:08:44.642 "trace_clear_tpoint_mask", 00:08:44.642 "trace_set_tpoint_mask", 00:08:44.642 "keyring_get_keys", 00:08:44.642 "spdk_get_version", 00:08:44.642 "rpc_get_methods" 00:08:44.642 ] 00:08:44.642 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:44.642 16:59:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63246 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 63246 ']' 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 63246 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.642 16:59:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63246 00:08:44.642 killing process with pid 63246 00:08:44.642 16:59:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.642 16:59:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.642 16:59:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63246' 00:08:44.642 16:59:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 63246 00:08:44.642 16:59:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 63246 00:08:47.169 ************************************ 00:08:47.169 END TEST spdkcli_tcp 00:08:47.169 ************************************ 00:08:47.169 00:08:47.169 real 0m4.111s 00:08:47.169 user 0m7.203s 00:08:47.169 sys 0m0.649s 00:08:47.169 16:59:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.169 16:59:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.169 16:59:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:47.169 16:59:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.169 16:59:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.169 16:59:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.169 ************************************ 00:08:47.169 START TEST dpdk_mem_utility 00:08:47.169 ************************************ 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:47.169 * Looking for test storage... 00:08:47.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:47.169 16:59:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:47.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:47.169 16:59:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63360 00:08:47.169 16:59:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63360 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63360 ']' 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.169 16:59:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.169 16:59:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:47.169 [2024-07-25 16:59:39.544232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:47.169 [2024-07-25 16:59:39.544428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:08:47.427 [2024-07-25 16:59:39.723426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.685 [2024-07-25 16:59:40.004987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.622 16:59:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.622 16:59:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:48.622 16:59:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:48.622 16:59:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:48.622 16:59:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.622 16:59:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:48.622 { 00:08:48.622 "filename": "/tmp/spdk_mem_dump.txt" 00:08:48.622 } 00:08:48.622 16:59:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.622 16:59:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:48.622 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:48.622 1 heaps totaling size 820.000000 MiB 00:08:48.622 size: 820.000000 MiB heap id: 0 00:08:48.622 end heaps---------- 00:08:48.622 8 mempools totaling size 598.116089 MiB 00:08:48.622 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:48.622 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:48.622 size: 84.521057 MiB name: bdev_io_63360 00:08:48.622 size: 51.011292 MiB name: evtpool_63360 00:08:48.622 size: 50.003479 MiB name: msgpool_63360 00:08:48.622 size: 21.763794 MiB name: PDU_Pool 00:08:48.622 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:48.622 size: 0.026123 MiB name: Session_Pool 00:08:48.622 end mempools------- 00:08:48.622 6 memzones totaling size 4.142822 MiB 00:08:48.622 size: 1.000366 MiB name: RG_ring_0_63360 00:08:48.622 size: 1.000366 MiB name: RG_ring_1_63360 00:08:48.622 size: 1.000366 MiB name: RG_ring_4_63360 00:08:48.622 size: 1.000366 MiB name: RG_ring_5_63360 
00:08:48.622 size: 0.125366 MiB name: RG_ring_2_63360 00:08:48.622 size: 0.015991 MiB name: RG_ring_3_63360 00:08:48.622 end memzones------- 00:08:48.622 16:59:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:48.622 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:08:48.622 list of free elements. size: 18.452271 MiB 00:08:48.622 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:48.622 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:48.622 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:48.622 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:48.622 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:48.622 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:48.622 element at address: 0x200019600000 with size: 0.999084 MiB 00:08:48.622 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:48.622 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:48.622 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:48.622 element at address: 0x200019900040 with size: 0.936401 MiB 00:08:48.622 element at address: 0x200000200000 with size: 0.830200 MiB 00:08:48.622 element at address: 0x20001b000000 with size: 0.564880 MiB 00:08:48.622 element at address: 0x200019200000 with size: 0.487976 MiB 00:08:48.622 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:48.622 element at address: 0x200013800000 with size: 0.467651 MiB 00:08:48.622 element at address: 0x200028400000 with size: 0.390442 MiB 00:08:48.622 element at address: 0x200003a00000 with size: 0.351990 MiB 00:08:48.622 list of standard malloc elements. 
size: 199.283325 MiB 00:08:48.622 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:48.622 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:48.622 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:48.622 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:48.622 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:48.622 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:48.622 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:48.622 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:48.622 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:08:48.622 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:08:48.622 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:48.622 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:08:48.622 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:48.622 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:08:48.622 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:48.623 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013877b80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0911c0 
with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0942c0 with size: 0.000244 MiB 
00:08:48.623 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200028463f40 with size: 0.000244 MiB 00:08:48.623 element at address: 0x200028464040 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:08:48.623 element at address: 0x20002846af80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b080 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b180 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b280 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b380 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b480 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b580 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b680 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b780 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b880 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846b980 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846be80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c080 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c180 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:48.624 element at 
address: 0x20002846cc80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d180 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846fd80 
with size: 0.000244 MiB 00:08:48.624 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:48.624 list of memzone associated elements. size: 602.264404 MiB 00:08:48.624 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:48.624 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:48.624 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:48.624 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:48.624 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:48.624 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63360_0 00:08:48.624 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:48.624 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63360_0 00:08:48.624 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:48.624 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63360_0 00:08:48.624 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:48.624 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:48.624 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:48.624 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:48.624 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:48.624 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63360 00:08:48.624 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:48.624 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63360 00:08:48.624 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:48.624 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63360 00:08:48.624 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:48.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:48.624 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:48.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:48.624 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:48.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:48.624 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:48.624 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:48.624 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:48.624 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63360 00:08:48.624 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:48.624 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63360 00:08:48.624 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:48.624 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63360 00:08:48.624 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:48.624 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63360 00:08:48.624 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:48.624 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63360 00:08:48.624 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:48.624 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:48.624 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:48.624 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:48.624 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:48.624 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:08:48.624 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:48.624 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63360 00:08:48.624 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:48.624 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:48.624 element at address: 0x200028464140 with size: 0.023804 MiB 00:08:48.624 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:48.624 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:48.624 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63360 00:08:48.624 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:08:48.624 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:48.624 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:08:48.624 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63360 00:08:48.624 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:48.624 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63360 00:08:48.624 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:08:48.624 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:48.624 16:59:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:48.624 16:59:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63360 00:08:48.624 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63360 ']' 00:08:48.624 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63360 00:08:48.624 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63360 00:08:48.625 killing process with pid 63360 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63360' 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63360 00:08:48.625 16:59:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63360 00:08:51.156 00:08:51.156 real 0m3.964s 00:08:51.156 user 0m3.901s 00:08:51.156 sys 0m0.615s 00:08:51.156 16:59:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.156 16:59:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:51.156 ************************************ 00:08:51.156 END TEST dpdk_mem_utility 00:08:51.156 ************************************ 00:08:51.156 16:59:43 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:51.156 16:59:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.156 16:59:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.156 16:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:51.156 ************************************ 00:08:51.156 START TEST event 00:08:51.156 ************************************ 00:08:51.156 16:59:43 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:51.156 * Looking for test storage... 
00:08:51.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:51.156 16:59:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:51.156 16:59:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:51.156 16:59:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.156 16:59:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:51.156 16:59:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.156 16:59:43 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.156 ************************************ 00:08:51.156 START TEST event_perf 00:08:51.156 ************************************ 00:08:51.156 16:59:43 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.156 Running I/O for 1 seconds...[2024-07-25 16:59:43.484336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:51.156 [2024-07-25 16:59:43.484493] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63458 ] 00:08:51.415 [2024-07-25 16:59:43.649822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.673 [2024-07-25 16:59:43.903446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.673 [2024-07-25 16:59:43.903576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.673 [2024-07-25 16:59:43.903697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.673 Running I/O for 1 seconds...[2024-07-25 16:59:43.903716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.051 00:08:53.051 lcore 0: 129603 00:08:53.051 lcore 1: 129604 00:08:53.051 lcore 2: 129605 00:08:53.051 lcore 3: 129602 00:08:53.051 done. 00:08:53.051 00:08:53.051 real 0m1.880s 00:08:53.051 user 0m4.606s 00:08:53.051 sys 0m0.138s 00:08:53.051 16:59:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.051 16:59:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:53.051 ************************************ 00:08:53.051 END TEST event_perf 00:08:53.051 ************************************ 00:08:53.051 16:59:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:53.051 16:59:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:53.051 16:59:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.051 16:59:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:53.051 ************************************ 00:08:53.051 START TEST event_reactor 00:08:53.051 ************************************ 00:08:53.051 16:59:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:53.051 [2024-07-25 16:59:45.412718] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
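The START TEST / END TEST banners and the real/user/sys timings threaded through this log come from the run_test wrapper in autotest_common.sh; a hedged sketch of its shape (the real helper also manages xtrace nesting and exit codes) is:
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # run the test command and report real/user/sys as seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }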
00:08:53.051 [2024-07-25 16:59:45.413105] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63503 ] 00:08:53.310 [2024-07-25 16:59:45.595555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.568 [2024-07-25 16:59:45.844748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.944 test_start 00:08:54.944 oneshot 00:08:54.944 tick 100 00:08:54.944 tick 100 00:08:54.944 tick 250 00:08:54.944 tick 100 00:08:54.944 tick 100 00:08:54.944 tick 100 00:08:54.944 tick 250 00:08:54.944 tick 500 00:08:54.944 tick 100 00:08:54.944 tick 100 00:08:54.944 tick 250 00:08:54.944 tick 100 00:08:54.944 tick 100 00:08:54.944 test_end 00:08:54.944 ************************************ 00:08:54.944 END TEST event_reactor 00:08:54.944 ************************************ 00:08:54.944 00:08:54.944 real 0m1.898s 00:08:54.944 user 0m1.653s 00:08:54.944 sys 0m0.128s 00:08:54.944 16:59:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.944 16:59:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:54.944 16:59:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.944 16:59:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:54.944 16:59:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.944 16:59:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:54.944 ************************************ 00:08:54.944 START TEST event_reactor_perf 00:08:54.944 ************************************ 00:08:54.944 16:59:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.944 [2024-07-25 16:59:47.376745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:54.944 [2024-07-25 16:59:47.376933] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:08:55.203 [2024-07-25 16:59:47.555348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.487 [2024-07-25 16:59:47.859928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.876 test_start 00:08:56.876 test_end 00:08:56.876 Performance: 274577 events per second 00:08:56.876 00:08:56.876 real 0m1.970s 00:08:56.876 user 0m1.717s 00:08:56.876 sys 0m0.134s 00:08:56.876 16:59:49 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.876 ************************************ 00:08:56.876 END TEST event_reactor_perf 00:08:56.876 ************************************ 00:08:56.876 16:59:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:57.135 16:59:49 event -- event/event.sh@49 -- # uname -s 00:08:57.135 16:59:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:57.135 16:59:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:57.135 16:59:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.135 16:59:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.135 16:59:49 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.135 ************************************ 00:08:57.135 START TEST event_scheduler 00:08:57.135 ************************************ 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:57.135 * Looking for test storage... 00:08:57.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:57.135 16:59:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:57.135 16:59:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63613 00:08:57.135 16:59:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:57.135 16:59:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63613 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63613 ']' 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.135 16:59:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.135 16:59:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.135 [2024-07-25 16:59:49.565587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:57.135 [2024-07-25 16:59:49.565807] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63613 ] 00:08:57.393 [2024-07-25 16:59:49.750836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.651 [2024-07-25 16:59:50.063906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.651 [2024-07-25 16:59:50.064160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.651 [2024-07-25 16:59:50.065121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.651 [2024-07-25 16:59:50.065134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:58.217 16:59:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:58.217 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:58.217 POWER: Cannot set governor of lcore 0 to userspace 00:08:58.217 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:58.217 POWER: Cannot set governor of lcore 0 to performance 00:08:58.217 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:58.217 POWER: Cannot set governor of lcore 0 to userspace 00:08:58.217 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:58.217 POWER: Cannot set governor of lcore 0 to userspace 00:08:58.217 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:58.217 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:58.217 POWER: Unable to set Power Management Environment for lcore 0 00:08:58.217 [2024-07-25 16:59:50.589918] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:58.217 [2024-07-25 16:59:50.590150] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:58.217 [2024-07-25 16:59:50.590358] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:58.217 [2024-07-25 16:59:50.590555] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:58.217 [2024-07-25 16:59:50.590742] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:58.217 [2024-07-25 16:59:50.590932] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.217 16:59:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.217 16:59:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:58.477 [2024-07-25 16:59:50.921574] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
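The framework_set_scheduler / framework_start_init exchange traced above is ordinary JSON-RPC against an app started with --wait-for-rpc; a minimal hedged sketch of the same sequence (socket path assumed) is:
    # assumes the scheduler app is listening on /var/tmp/spdk.sock after --wait-for-rpc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # the POWER errors above only mean cpufreq scaling is
                                           # unavailable; the dynamic scheduler still initializes
    $rpc framework_start_init              # finish subsystem init so the reactors start scheduling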
00:08:58.477 16:59:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.477 16:59:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:58.477 16:59:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.477 16:59:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.477 16:59:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:58.477 ************************************ 00:08:58.477 START TEST scheduler_create_thread 00:08:58.477 ************************************ 00:08:58.477 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:58.477 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:58.477 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.477 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 2 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 3 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 4 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 5 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 6 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 7 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 8 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 9 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 10 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.737 16:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.113 16:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.113 16:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:00.113 16:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:00.113 16:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.113 16:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:01.486 ************************************ 00:09:01.486 END TEST scheduler_create_thread 00:09:01.486 ************************************ 00:09:01.486 16:59:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.486 00:09:01.486 real 0m2.620s 00:09:01.486 user 0m0.013s 00:09:01.486 sys 0m0.011s 00:09:01.486 16:59:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.486 16:59:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:01.486 16:59:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:01.486 16:59:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63613 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63613 ']' 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63613 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63613 00:09:01.486 killing process with pid 63613 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63613' 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63613 00:09:01.486 16:59:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63613 00:09:01.744 [2024-07-25 16:59:54.033035] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
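The pid 63613 teardown above repeats the killprocess pattern used earlier for pid 63360; a hedged reconstruction from the xtrace (the real helper in autotest_common.sh also special-cases sudo-wrapped processes) looks like:
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0                   # nothing to do if the process is already gone
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_2 above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap it so no zombie is left behind
    }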
00:09:03.139 00:09:03.139 real 0m5.932s 00:09:03.139 user 0m9.894s 00:09:03.139 sys 0m0.563s 00:09:03.139 ************************************ 00:09:03.139 END TEST event_scheduler 00:09:03.139 ************************************ 00:09:03.139 16:59:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.139 16:59:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.139 16:59:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:03.139 16:59:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:03.139 16:59:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.139 16:59:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.139 16:59:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:03.139 ************************************ 00:09:03.139 START TEST app_repeat 00:09:03.139 ************************************ 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:03.139 Process app_repeat pid: 63730 00:09:03.139 spdk_app_start Round 0 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63730 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63730' 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:03.139 16:59:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63730 /var/tmp/spdk-nbd.sock 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63730 ']' 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:03.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.139 16:59:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:03.139 [2024-07-25 16:59:55.425443] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:03.139 [2024-07-25 16:59:55.425888] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63730 ] 00:09:03.139 [2024-07-25 16:59:55.605614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:03.705 [2024-07-25 16:59:55.928472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.705 [2024-07-25 16:59:55.928472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.329 16:59:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.329 16:59:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:04.329 16:59:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:04.587 Malloc0 00:09:04.587 16:59:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:04.844 Malloc1 00:09:05.102 16:59:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:05.102 16:59:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.103 16:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:05.361 /dev/nbd0 00:09:05.361 16:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.361 16:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:05.361 16:59:57 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.361 1+0 records in 00:09:05.361 1+0 records out 00:09:05.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439407 s, 9.3 MB/s 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:05.361 16:59:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:05.361 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.361 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.361 16:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:05.619 /dev/nbd1 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:05.619 1+0 records in 00:09:05.619 1+0 records out 00:09:05.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680353 s, 6.0 MB/s 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:05.619 16:59:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.619 16:59:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.619 
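The attach/probe steps above, and the random write-and-compare pass that follows, reduce to a small rpc.py + dd + cmp pattern; a hedged sketch with shortened paths (the test's own scratch files live under test/event/) is:
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Malloc0 /dev/nbd0                         # export the malloc bdev over nbd
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # probe: one direct read must succeed
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256      # 1 MiB reference pattern
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                       # data must read back byte-identical
    $rpc nbd_stop_disk /dev/nbd0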
16:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.877 16:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:05.877 { 00:09:05.877 "nbd_device": "/dev/nbd0", 00:09:05.877 "bdev_name": "Malloc0" 00:09:05.877 }, 00:09:05.877 { 00:09:05.877 "nbd_device": "/dev/nbd1", 00:09:05.877 "bdev_name": "Malloc1" 00:09:05.877 } 00:09:05.877 ]' 00:09:05.877 16:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.877 { 00:09:05.877 "nbd_device": "/dev/nbd0", 00:09:05.877 "bdev_name": "Malloc0" 00:09:05.877 }, 00:09:05.877 { 00:09:05.877 "nbd_device": "/dev/nbd1", 00:09:05.877 "bdev_name": "Malloc1" 00:09:05.877 } 00:09:05.877 ]' 00:09:05.877 16:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:06.136 /dev/nbd1' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:06.136 /dev/nbd1' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:06.136 256+0 records in 00:09:06.136 256+0 records out 00:09:06.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743173 s, 141 MB/s 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:06.136 256+0 records in 00:09:06.136 256+0 records out 00:09:06.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329343 s, 31.8 MB/s 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.136 256+0 records in 00:09:06.136 256+0 records out 00:09:06.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398984 s, 26.3 MB/s 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.136 16:59:58 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.136 16:59:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.394 16:59:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:06.653 16:59:59 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.653 16:59:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:07.219 16:59:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:07.219 16:59:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:07.477 16:59:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:09.375 [2024-07-25 17:00:01.460262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.375 [2024-07-25 17:00:01.764661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.375 [2024-07-25 17:00:01.764670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.631 [2024-07-25 17:00:01.995995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:09.631 [2024-07-25 17:00:01.996143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:10.564 spdk_app_start Round 1 00:09:10.564 17:00:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:10.564 17:00:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:10.564 17:00:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63730 /var/tmp/spdk-nbd.sock 00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63730 ']' 00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:10.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
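The SIGTERM restart just traced between Round 0 and Round 1 comes from the outer loop of the app_repeat test; reconstructed roughly from the event.sh xtrace (variable names here are placeholders), it is:
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"      # app_repeat re-opens the socket each round
        $rpc_py bdev_malloc_create 64 4096             # Malloc0
        $rpc_py bdev_malloc_create 64 4096             # Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc_py spdk_kill_instance SIGTERM             # app restarts itself; the next round follows
        sleep 3
    done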
00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.564 17:00:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:10.823 17:00:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.823 17:00:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:10.823 17:00:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.390 Malloc0 00:09:11.390 17:00:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:11.655 Malloc1 00:09:11.656 17:00:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.656 17:00:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:11.656 /dev/nbd0 00:09:11.917 17:00:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.917 17:00:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.917 1+0 records in 00:09:11.917 1+0 records out 
00:09:11.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430628 s, 9.5 MB/s 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.917 17:00:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:11.917 17:00:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.917 17:00:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.917 17:00:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:12.178 /dev/nbd1 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:12.178 1+0 records in 00:09:12.178 1+0 records out 00:09:12.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434593 s, 9.4 MB/s 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:12.178 17:00:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.178 17:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:12.435 { 00:09:12.435 "nbd_device": "/dev/nbd0", 00:09:12.435 "bdev_name": "Malloc0" 00:09:12.435 }, 00:09:12.435 { 00:09:12.435 "nbd_device": "/dev/nbd1", 00:09:12.435 "bdev_name": "Malloc1" 00:09:12.435 } 
00:09:12.435 ]' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:12.435 { 00:09:12.435 "nbd_device": "/dev/nbd0", 00:09:12.435 "bdev_name": "Malloc0" 00:09:12.435 }, 00:09:12.435 { 00:09:12.435 "nbd_device": "/dev/nbd1", 00:09:12.435 "bdev_name": "Malloc1" 00:09:12.435 } 00:09:12.435 ]' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:12.435 /dev/nbd1' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:12.435 /dev/nbd1' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:12.435 256+0 records in 00:09:12.435 256+0 records out 00:09:12.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764799 s, 137 MB/s 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.435 256+0 records in 00:09:12.435 256+0 records out 00:09:12.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284118 s, 36.9 MB/s 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.435 17:00:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:12.693 256+0 records in 00:09:12.693 256+0 records out 00:09:12.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0374075 s, 28.0 MB/s 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:12.693 17:00:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.693 17:00:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.950 17:00:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.207 17:00:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:13.465 17:00:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:13.465 17:00:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:14.031 17:00:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:15.405 [2024-07-25 17:00:07.720204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.663 [2024-07-25 17:00:07.996409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.663 [2024-07-25 17:00:07.996410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.922 [2024-07-25 17:00:08.227078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:15.922 [2024-07-25 17:00:08.227224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:17.292 spdk_app_start Round 2 00:09:17.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:17.292 17:00:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:17.292 17:00:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:17.292 17:00:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63730 /var/tmp/spdk-nbd.sock 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63730 ']' 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
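The dd/cmp sequence in the block above is nbd_common.sh's nbd_dd_data_verify, invoked once with "write" and once with "verify". Pieced together from the trace it looks roughly like this; $testdir is a placeholder for /home/vagrant/spdk_repo/spdk/test/event, and exact variable handling may differ in the real helper:

    nbd_dd_data_verify() {
        local nbd_list=($1)                 # "/dev/nbd0 /dev/nbd1"
        local operation=$2                  # "write" on the first pass, "verify" on the second
        local tmp_file=$testdir/nbdrandtest
        if [ "$operation" = "write" ]; then
            # 1 MiB of random data, pushed to every exported NBD device with direct I/O
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = "verify" ]; then
            # read the devices back and byte-compare the first 1M against the pattern file
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

cmp exits non-zero on the first mismatching byte, so a silent pass here is the actual data-integrity check for the malloc bdevs behind /dev/nbd0 and /dev/nbd1.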
00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.292 17:00:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:17.292 17:00:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:17.549 Malloc0 00:09:17.549 17:00:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:17.806 Malloc1 00:09:18.064 17:00:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:18.064 17:00:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:18.322 /dev/nbd0 00:09:18.322 17:00:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:18.322 17:00:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:18.322 1+0 records in 00:09:18.322 1+0 records out 
00:09:18.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343807 s, 11.9 MB/s 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.322 17:00:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:18.322 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.322 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:18.322 17:00:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:18.580 /dev/nbd1 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:18.580 1+0 records in 00:09:18.580 1+0 records out 00:09:18.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360094 s, 11.4 MB/s 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.580 17:00:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.580 17:00:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.838 17:00:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:18.838 { 00:09:18.838 "nbd_device": "/dev/nbd0", 00:09:18.838 "bdev_name": "Malloc0" 00:09:18.838 }, 00:09:18.838 { 00:09:18.838 "nbd_device": "/dev/nbd1", 00:09:18.838 "bdev_name": "Malloc1" 00:09:18.838 } 
00:09:18.838 ]' 00:09:18.838 17:00:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:18.838 { 00:09:18.838 "nbd_device": "/dev/nbd0", 00:09:18.838 "bdev_name": "Malloc0" 00:09:18.838 }, 00:09:18.838 { 00:09:18.838 "nbd_device": "/dev/nbd1", 00:09:18.838 "bdev_name": "Malloc1" 00:09:18.838 } 00:09:18.838 ]' 00:09:18.838 17:00:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:19.095 /dev/nbd1' 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:19.095 /dev/nbd1' 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:19.095 256+0 records in 00:09:19.095 256+0 records out 00:09:19.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723974 s, 145 MB/s 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:19.095 256+0 records in 00:09:19.095 256+0 records out 00:09:19.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376904 s, 27.8 MB/s 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:19.095 256+0 records in 00:09:19.095 256+0 records out 00:09:19.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0391437 s, 26.8 MB/s 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:19.095 17:00:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:19.096 17:00:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.096 17:00:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.354 17:00:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.611 17:00:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.869 17:00:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:19.869 17:00:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:19.869 17:00:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:20.130 17:00:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:20.130 17:00:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:20.696 17:00:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:22.598 [2024-07-25 17:00:14.587404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.598 [2024-07-25 17:00:14.917441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.598 [2024-07-25 17:00:14.917452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.855 [2024-07-25 17:00:15.189858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:22.855 [2024-07-25 17:00:15.190025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:23.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:23.788 17:00:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63730 /var/tmp/spdk-nbd.sock 00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63730 ']' 00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
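The 20-iteration counters and the single-block dd probes that follow each nbd_start_disk call above come from the waitfornbd helper in autotest_common.sh (lines 868-889 of the trace). A sketch of what it does, with the retry delay omitted because the device was ready on the first pass in this run and the failure path is assumed rather than observed; $testdir again stands in for the event test directory:

    waitfornbd() {
        local nbd_name=$1                   # nbd0 or nbd1 here
        local i size
        # wait (up to 20 attempts) for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        # then prove it is actually readable: one 4 KiB O_DIRECT read into a scratch file
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            [ "$size" != "0" ] && return 0
        done
        return 1                            # assumed failure path, not exercised in this log
    }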
00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.788 17:00:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:23.788 17:00:16 event.app_repeat -- event/event.sh@39 -- # killprocess 63730 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63730 ']' 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63730 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63730 00:09:23.788 killing process with pid 63730 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63730' 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63730 00:09:23.788 17:00:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63730 00:09:25.685 spdk_app_start is called in Round 0. 00:09:25.685 Shutdown signal received, stop current app iteration 00:09:25.685 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:09:25.685 spdk_app_start is called in Round 1. 00:09:25.685 Shutdown signal received, stop current app iteration 00:09:25.685 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:09:25.685 spdk_app_start is called in Round 2. 00:09:25.685 Shutdown signal received, stop current app iteration 00:09:25.685 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:09:25.685 spdk_app_start is called in Round 3. 00:09:25.685 Shutdown signal received, stop current app iteration 00:09:25.685 17:00:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:25.685 17:00:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:25.685 00:09:25.685 real 0m22.311s 00:09:25.685 user 0m46.888s 00:09:25.685 sys 0m3.692s 00:09:25.685 17:00:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.685 17:00:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 ************************************ 00:09:25.685 END TEST app_repeat 00:09:25.685 ************************************ 00:09:25.685 17:00:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:25.685 17:00:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:25.685 17:00:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.685 17:00:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.685 17:00:17 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 ************************************ 00:09:25.685 START TEST cpu_locks 00:09:25.685 ************************************ 00:09:25.685 17:00:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:25.685 * Looking for test storage... 
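The app_repeat teardown above runs through the killprocess helper (autotest_common.sh lines 950-974 in the trace). Only the Linux, non-sudo path is exercised here; a sketch of that path, with the guards' untaken branches noted rather than guessed:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # guard at line 950; empty-pid behaviour assumed
        kill -0 "$pid"                                      # fail early if the process is already gone
        if [ "$(uname)" = "Linux" ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # resolves to reactor_0 here
        fi
        # the '[ "$process_name" = sudo ]' check in the trace keeps the helper from
        # signalling a sudo wrapper instead of the real target; that branch is not taken here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap it so the caller sees the exit status
    }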
00:09:25.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:25.685 17:00:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:25.685 17:00:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:25.685 17:00:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:25.685 17:00:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:25.685 17:00:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:25.685 17:00:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.685 17:00:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 ************************************ 00:09:25.685 START TEST default_locks 00:09:25.685 ************************************ 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64199 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64199 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64199 ']' 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:25.685 17:00:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 [2024-07-25 17:00:17.973938] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:25.685 [2024-07-25 17:00:17.974187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64199 ] 00:09:25.944 [2024-07-25 17:00:18.156301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.202 [2024-07-25 17:00:18.459483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.136 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.136 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:27.136 17:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64199 00:09:27.136 17:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64199 00:09:27.136 17:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64199 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 64199 ']' 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 64199 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64199 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.422 killing process with pid 64199 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64199' 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 64199 00:09:27.422 17:00:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 64199 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64199 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64199 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 64199 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64199 ']' 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.706 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.706 ERROR: process (pid: 64199) is no longer running 00:09:30.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64199) - No such process 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:30.706 00:09:30.706 real 0m4.607s 00:09:30.706 user 0m4.396s 00:09:30.706 sys 0m0.886s 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.706 17:00:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.706 ************************************ 00:09:30.706 END TEST default_locks 00:09:30.706 ************************************ 00:09:30.706 17:00:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:30.706 17:00:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:30.706 17:00:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.706 17:00:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.706 ************************************ 00:09:30.706 START TEST default_locks_via_rpc 00:09:30.706 ************************************ 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64279 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64279 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64279 ']' 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
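The default_locks case above hinges on one check, cpu_locks.sh's locks_exist (line 22 in the trace): the running spdk_tgt must hold a file lock whose name contains spdk_cpu_lock. Sketched:

    locks_exist() {
        # lslocks lists every lock held by the pid; the scheduler takes one
        # flock per claimed CPU core, so a healthy "-m 0x1" target shows one hit
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

After killprocess, the companion no_locks check (lines 26-27) builds a lock_files array - presumably by globbing for leftover lock files, the glob itself is not visible in this trace - and asserts it is empty, which is why "lock_files=()" and "(( 0 != 0 ))" appear once pid 64199 is gone.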
00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.706 17:00:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.706 [2024-07-25 17:00:22.617118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:30.706 [2024-07-25 17:00:22.617327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64279 ] 00:09:30.706 [2024-07-25 17:00:22.793712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.706 [2024-07-25 17:00:23.081857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64279 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.640 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64279 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64279 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 64279 ']' 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 64279 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64279 00:09:32.264 killing process with pid 64279 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64279' 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 64279 00:09:32.264 17:00:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 64279 00:09:34.794 ************************************ 00:09:34.794 END TEST default_locks_via_rpc 00:09:34.794 ************************************ 00:09:34.794 00:09:34.794 real 0m4.578s 00:09:34.794 user 0m4.401s 00:09:34.794 sys 0m0.913s 00:09:34.794 17:00:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.794 17:00:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.794 17:00:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:34.794 17:00:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.794 17:00:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.794 17:00:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.794 ************************************ 00:09:34.794 START TEST non_locking_app_on_locked_coremask 00:09:34.794 ************************************ 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:34.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64359 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64359 /var/tmp/spdk.sock 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64359 ']' 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.794 17:00:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:35.052 [2024-07-25 17:00:27.264952] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:35.052 [2024-07-25 17:00:27.265163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64359 ] 00:09:35.052 [2024-07-25 17:00:27.439041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.322 [2024-07-25 17:00:27.720060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64375 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64375 /var/tmp/spdk2.sock 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64375 ']' 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.254 17:00:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.511 [2024-07-25 17:00:28.786583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:36.511 [2024-07-25 17:00:28.787017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64375 ] 00:09:36.511 [2024-07-25 17:00:28.968625] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
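What non_locking_app_on_locked_coremask is exercising here, pieced together from cpu_locks.sh lines 79-90 in the trace (the backgrounding and pid plumbing are reconstructed, not verbatim):

    # first target claims core 0 and takes its spdk_cpu_lock file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    # second target asks for the same cpumask but opts out of core locking,
    # so it starts cleanly alongside the first ("CPU core locks deactivated.")
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

    locks_exist "$spdk_tgt_pid"     # the locked instance still owns the core-0 lock
    killprocess "$spdk_tgt_pid"
    killprocess "$spdk_tgt_pid2"

locking_app_on_unlocked_coremask, which starts a few lines further on, flips the roles: the first target (pid 64534) runs with --disable-cpumask-locks and the second (pid 64561) takes the lock, so locks_exist is asserted against the second pid instead.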
00:09:36.511 [2024-07-25 17:00:28.968698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.074 [2024-07-25 17:00:29.528131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.064 17:00:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.064 17:00:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:39.064 17:00:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64359 00:09:39.064 17:00:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64359 00:09:39.064 17:00:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64359 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64359 ']' 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64359 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64359 00:09:39.996 killing process with pid 64359 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64359' 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64359 00:09:39.996 17:00:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64359 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64375 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64375 ']' 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64375 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64375 00:09:45.259 killing process with pid 64375 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64375' 00:09:45.259 17:00:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64375 00:09:45.259 17:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64375 00:09:47.789 00:09:47.789 real 0m12.942s 00:09:47.789 user 0m13.099s 00:09:47.789 sys 0m1.829s 00:09:47.789 17:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.789 17:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 ************************************ 00:09:47.789 END TEST non_locking_app_on_locked_coremask 00:09:47.789 ************************************ 00:09:47.789 17:00:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:47.789 17:00:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.789 17:00:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.789 17:00:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 ************************************ 00:09:47.789 START TEST locking_app_on_unlocked_coremask 00:09:47.789 ************************************ 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:47.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64534 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64534 /var/tmp/spdk.sock 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64534 ']' 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.789 17:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.789 [2024-07-25 17:00:40.248913] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:47.789 [2024-07-25 17:00:40.249414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64534 ] 00:09:48.048 [2024-07-25 17:00:40.429444] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:48.048 [2024-07-25 17:00:40.429807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.306 [2024-07-25 17:00:40.708683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64561 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64561 /var/tmp/spdk2.sock 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64561 ']' 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.240 17:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.498 [2024-07-25 17:00:41.756469] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:49.498 [2024-07-25 17:00:41.757537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64561 ] 00:09:49.498 [2024-07-25 17:00:41.950344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.433 [2024-07-25 17:00:42.541619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.331 17:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.331 17:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:52.331 17:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64561 00:09:52.331 17:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64561 00:09:52.331 17:00:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64534 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64534 ']' 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64534 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64534 00:09:53.267 killing process with pid 64534 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64534' 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64534 00:09:53.267 17:00:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64534 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64561 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64561 ']' 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64561 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64561 00:09:58.531 killing process with pid 64561 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.531 17:00:50 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64561' 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64561 00:09:58.531 17:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64561 00:10:01.060 ************************************ 00:10:01.060 END TEST locking_app_on_unlocked_coremask 00:10:01.060 ************************************ 00:10:01.060 00:10:01.060 real 0m12.875s 00:10:01.060 user 0m13.195s 00:10:01.060 sys 0m1.827s 00:10:01.060 17:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.060 17:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.060 17:00:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:01.060 17:00:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:01.060 17:00:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.060 17:00:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.060 ************************************ 00:10:01.060 START TEST locking_app_on_locked_coremask 00:10:01.060 ************************************ 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64715 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64715 /var/tmp/spdk.sock 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64715 ']' 00:10:01.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.060 17:00:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.060 [2024-07-25 17:00:53.175535] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:01.060 [2024-07-25 17:00:53.175734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64715 ] 00:10:01.060 [2024-07-25 17:00:53.355220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.318 [2024-07-25 17:00:53.633190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64742 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64742 /var/tmp/spdk2.sock 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64742 /var/tmp/spdk2.sock 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64742 /var/tmp/spdk2.sock 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64742 ']' 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.254 17:00:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.254 [2024-07-25 17:00:54.658717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
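The spdk_tgt instance launched just above targets core 0, which pid 64715 already holds, so the suite expects the claim error and exit that follow rather than a listening socket. A stripped-down version of that negative check, without the suite's NOT/waitforlisten wrappers (this shape only makes sense when the failure is expected, since a target that did acquire the core would keep running in the foreground):

    # Expect the second instance to refuse the already-claimed core and exit non-zero.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second spdk_tgt acquired core 0" >&2
        exit 1
    fi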
00:10:02.254 [2024-07-25 17:00:54.660485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64742 ] 00:10:02.513 [2024-07-25 17:00:54.845360] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64715 has claimed it. 00:10:02.513 [2024-07-25 17:00:54.845450] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:03.079 ERROR: process (pid: 64742) is no longer running 00:10:03.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64742) - No such process 00:10:03.079 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.079 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64715 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64715 00:10:03.080 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64715 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64715 ']' 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64715 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.340 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64715 00:10:03.598 killing process with pid 64715 00:10:03.598 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.598 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.598 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64715' 00:10:03.598 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64715 00:10:03.598 17:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64715 00:10:06.126 00:10:06.126 real 0m5.238s 00:10:06.126 user 0m5.406s 00:10:06.126 sys 0m1.043s 00:10:06.126 17:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.126 ************************************ 00:10:06.126 END 
TEST locking_app_on_locked_coremask 00:10:06.126 ************************************ 00:10:06.126 17:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.126 17:00:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:06.126 17:00:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.126 17:00:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.126 17:00:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:06.126 ************************************ 00:10:06.126 START TEST locking_overlapped_coremask 00:10:06.126 ************************************ 00:10:06.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64807 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64807 /var/tmp/spdk.sock 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64807 ']' 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.126 17:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.126 [2024-07-25 17:00:58.480495] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:06.126 [2024-07-25 17:00:58.480897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64807 ] 00:10:06.384 [2024-07-25 17:00:58.659053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.643 [2024-07-25 17:00:58.952142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.643 [2024-07-25 17:00:58.952266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.643 [2024-07-25 17:00:58.952290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.577 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.577 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:07.577 17:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:07.577 17:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64836 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64836 /var/tmp/spdk2.sock 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64836 /var/tmp/spdk2.sock 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:07.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64836 /var/tmp/spdk2.sock 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64836 ']' 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.578 17:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.578 [2024-07-25 17:01:00.043688] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:07.578 [2024-07-25 17:01:00.043879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64836 ] 00:10:07.836 [2024-07-25 17:01:00.225453] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64807 has claimed it. 00:10:07.836 [2024-07-25 17:01:00.225576] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:08.403 ERROR: process (pid: 64836) is no longer running 00:10:08.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64836) - No such process 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64807 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64807 ']' 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64807 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64807 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64807' 00:10:08.403 killing process with pid 64807 00:10:08.403 17:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64807 00:10:08.403 17:01:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64807 00:10:10.934 00:10:10.934 real 0m4.998s 00:10:10.934 user 0m12.850s 00:10:10.935 sys 0m0.849s 00:10:10.935 17:01:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.935 17:01:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.935 ************************************ 00:10:10.935 END TEST locking_overlapped_coremask 00:10:10.935 ************************************ 00:10:10.935 17:01:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:10.935 17:01:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:10.935 17:01:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.935 17:01:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.193 ************************************ 00:10:11.193 START TEST locking_overlapped_coremask_via_rpc 00:10:11.193 ************************************ 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:11.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64900 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64900 /var/tmp/spdk.sock 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64900 ']' 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.193 17:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.193 [2024-07-25 17:01:03.538546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:11.193 [2024-07-25 17:01:03.538740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64900 ] 00:10:11.451 [2024-07-25 17:01:03.705769] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:11.451 [2024-07-25 17:01:03.705832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.709 [2024-07-25 17:01:03.981151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.709 [2024-07-25 17:01:03.981299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.709 [2024-07-25 17:01:03.981335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.643 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64929 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64929 /var/tmp/spdk2.sock 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64929 ']' 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.644 17:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.644 [2024-07-25 17:01:05.042923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:12.644 [2024-07-25 17:01:05.044563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64929 ] 00:10:12.902 [2024-07-25 17:01:05.234151] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:12.902 [2024-07-25 17:01:05.234244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.468 [2024-07-25 17:01:05.759214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.468 [2024-07-25 17:01:05.759325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.468 [2024-07-25 17:01:05.759340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.369 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.369 [2024-07-25 17:01:07.827262] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64900 has claimed it. 00:10:15.627 request: 00:10:15.627 { 00:10:15.627 "method": "framework_enable_cpumask_locks", 00:10:15.627 "req_id": 1 00:10:15.627 } 00:10:15.627 Got JSON-RPC error response 00:10:15.627 response: 00:10:15.627 { 00:10:15.627 "code": -32603, 00:10:15.627 "message": "Failed to claim CPU core: 2" 00:10:15.627 } 00:10:15.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
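Outside the harness, the same RPC round trip can be driven with scripts/rpc.py, which appears elsewhere in this log. A sketch, run from the repository root, where the second call is the one expected to come back with the -32603 "Failed to claim CPU core: 2" error shown above, because core 2 is already locked by pid 64900:

    # First target (default /var/tmp/spdk.sock), started with --disable-cpumask-locks:
    # enabling the locks now claims its cores.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target on its own socket with an overlapping mask: the claim should fail.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks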
00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64900 /var/tmp/spdk.sock 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64900 ']' 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.627 17:01:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64929 /var/tmp/spdk2.sock 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64929 ']' 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.886 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:16.145 ************************************ 00:10:16.145 END TEST locking_overlapped_coremask_via_rpc 00:10:16.145 ************************************ 00:10:16.145 00:10:16.145 real 0m5.046s 00:10:16.145 user 0m1.747s 00:10:16.145 sys 0m0.278s 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.145 17:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.145 17:01:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:16.145 17:01:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64900 ]] 00:10:16.145 17:01:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64900 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64900 ']' 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64900 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64900 00:10:16.145 killing process with pid 64900 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64900' 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64900 00:10:16.145 17:01:08 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64900 00:10:19.430 17:01:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64929 ]] 00:10:19.430 17:01:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64929 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64929 ']' 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64929 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.430 
17:01:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64929 00:10:19.430 killing process with pid 64929 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64929' 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64929 00:10:19.430 17:01:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64929 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64900 ]] 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64900 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64900 ']' 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64900 00:10:21.340 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64900) - No such process 00:10:21.340 Process with pid 64900 is not found 00:10:21.340 Process with pid 64929 is not found 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64900 is not found' 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64929 ]] 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64929 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64929 ']' 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64929 00:10:21.340 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64929) - No such process 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64929 is not found' 00:10:21.340 17:01:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:21.340 ************************************ 00:10:21.340 END TEST cpu_locks 00:10:21.340 ************************************ 00:10:21.340 00:10:21.340 real 0m55.817s 00:10:21.340 user 1m32.712s 00:10:21.340 sys 0m8.977s 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.340 17:01:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:21.340 ************************************ 00:10:21.340 END TEST event 00:10:21.340 ************************************ 00:10:21.340 00:10:21.340 real 1m30.243s 00:10:21.340 user 2m37.610s 00:10:21.340 sys 0m13.884s 00:10:21.340 17:01:13 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.340 17:01:13 event -- common/autotest_common.sh@10 -- # set +x 00:10:21.340 17:01:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:21.340 17:01:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:21.340 17:01:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.340 17:01:13 -- common/autotest_common.sh@10 -- # set +x 00:10:21.340 ************************************ 00:10:21.340 START TEST thread 00:10:21.340 ************************************ 00:10:21.341 17:01:13 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:21.341 * Looking for test storage... 
00:10:21.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:21.341 17:01:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:21.341 17:01:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:21.341 17:01:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.341 17:01:13 thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.341 ************************************ 00:10:21.341 START TEST thread_poller_perf 00:10:21.341 ************************************ 00:10:21.341 17:01:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:21.341 [2024-07-25 17:01:13.782098] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:21.341 [2024-07-25 17:01:13.782250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65116 ] 00:10:21.600 [2024-07-25 17:01:13.950462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.858 [2024-07-25 17:01:14.242173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.858 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:23.234 ====================================== 00:10:23.234 busy:2210261682 (cyc) 00:10:23.234 total_run_count: 293000 00:10:23.234 tsc_hz: 2200000000 (cyc) 00:10:23.234 ====================================== 00:10:23.234 poller_cost: 7543 (cyc), 3428 (nsec) 00:10:23.234 00:10:23.234 real 0m1.915s 00:10:23.234 user 0m1.681s 00:10:23.234 sys 0m0.123s 00:10:23.234 17:01:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.234 ************************************ 00:10:23.234 END TEST thread_poller_perf 00:10:23.234 ************************************ 00:10:23.234 17:01:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.493 17:01:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:23.493 17:01:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:23.493 17:01:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.493 17:01:15 thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.493 ************************************ 00:10:23.493 START TEST thread_poller_perf 00:10:23.493 ************************************ 00:10:23.493 17:01:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:23.493 [2024-07-25 17:01:15.763244] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:23.493 [2024-07-25 17:01:15.763513] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65158 ] 00:10:23.493 [2024-07-25 17:01:15.935543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.752 Running 1000 pollers for 1 seconds with 0 microseconds period. 
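The poller_cost line in the first run above is just the quotient of the counters printed with it: 2210261682 busy cycles over 293000 iterations is about 7543 cycles per poller invocation, and at the reported 2200000000 Hz TSC that comes to roughly 3428 ns. A throwaway awk one-liner to recompute it from those three numbers (values copied from the run above):

    awk 'BEGIN { busy=2210261682; runs=293000; hz=2200000000;
                 cyc=int(busy/runs);
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc*1e9/hz) }'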
00:10:23.752 [2024-07-25 17:01:16.180668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.128 ====================================== 00:10:25.128 busy:2204340622 (cyc) 00:10:25.128 total_run_count: 3818000 00:10:25.128 tsc_hz: 2200000000 (cyc) 00:10:25.128 ====================================== 00:10:25.128 poller_cost: 577 (cyc), 262 (nsec) 00:10:25.386 00:10:25.386 real 0m1.883s 00:10:25.386 user 0m1.658s 00:10:25.386 sys 0m0.115s 00:10:25.386 17:01:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.386 ************************************ 00:10:25.386 END TEST thread_poller_perf 00:10:25.386 17:01:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:25.386 ************************************ 00:10:25.386 17:01:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:25.386 00:10:25.386 real 0m3.991s 00:10:25.386 user 0m3.412s 00:10:25.386 sys 0m0.348s 00:10:25.386 17:01:17 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.386 ************************************ 00:10:25.386 END TEST thread 00:10:25.386 ************************************ 00:10:25.386 17:01:17 thread -- common/autotest_common.sh@10 -- # set +x 00:10:25.386 17:01:17 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:10:25.386 17:01:17 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:25.386 17:01:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.386 17:01:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.386 17:01:17 -- common/autotest_common.sh@10 -- # set +x 00:10:25.386 ************************************ 00:10:25.386 START TEST app_cmdline 00:10:25.386 ************************************ 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:25.386 * Looking for test storage... 00:10:25.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:25.386 17:01:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.386 17:01:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65239 00:10:25.386 17:01:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:25.386 17:01:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65239 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 65239 ']' 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.386 17:01:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:25.645 [2024-07-25 17:01:17.907592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:25.645 [2024-07-25 17:01:17.908436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65239 ] 00:10:25.645 [2024-07-25 17:01:18.088330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.212 [2024-07-25 17:01:18.375192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.779 17:01:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.779 17:01:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:26.779 17:01:19 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:27.039 { 00:10:27.039 "version": "SPDK v24.09-pre git sha1 704257090", 00:10:27.039 "fields": { 00:10:27.039 "major": 24, 00:10:27.039 "minor": 9, 00:10:27.039 "patch": 0, 00:10:27.039 "suffix": "-pre", 00:10:27.039 "commit": "704257090" 00:10:27.039 } 00:10:27.039 } 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:27.039 17:01:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.039 17:01:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:27.039 17:01:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:27.039 17:01:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.298 17:01:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:27.298 17:01:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:27.298 17:01:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:27.298 17:01:19 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:27.298 request: 00:10:27.298 { 00:10:27.298 "method": "env_dpdk_get_mem_stats", 00:10:27.298 "req_id": 1 00:10:27.298 } 00:10:27.298 Got JSON-RPC error response 00:10:27.298 response: 00:10:27.298 { 00:10:27.298 "code": -32601, 00:10:27.298 "message": "Method not found" 00:10:27.298 } 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:27.557 17:01:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65239 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 65239 ']' 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 65239 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65239 00:10:27.557 killing process with pid 65239 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65239' 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@969 -- # kill 65239 00:10:27.557 17:01:19 app_cmdline -- common/autotest_common.sh@974 -- # wait 65239 00:10:30.090 ************************************ 00:10:30.090 END TEST app_cmdline 00:10:30.090 ************************************ 00:10:30.090 00:10:30.090 real 0m4.414s 00:10:30.090 user 0m4.716s 00:10:30.090 sys 0m0.626s 00:10:30.090 17:01:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.090 17:01:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 17:01:22 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:30.090 17:01:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:30.090 17:01:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.090 17:01:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 ************************************ 00:10:30.090 START TEST version 00:10:30.090 ************************************ 00:10:30.090 17:01:22 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:30.090 * Looking for test storage... 
00:10:30.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:30.090 17:01:22 version -- app/version.sh@17 -- # get_header_version major 00:10:30.090 17:01:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # cut -f2 00:10:30.090 17:01:22 version -- app/version.sh@17 -- # major=24 00:10:30.090 17:01:22 version -- app/version.sh@18 -- # get_header_version minor 00:10:30.090 17:01:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # cut -f2 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.090 17:01:22 version -- app/version.sh@18 -- # minor=9 00:10:30.090 17:01:22 version -- app/version.sh@19 -- # get_header_version patch 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # cut -f2 00:10:30.090 17:01:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.090 17:01:22 version -- app/version.sh@19 -- # patch=0 00:10:30.090 17:01:22 version -- app/version.sh@20 -- # get_header_version suffix 00:10:30.090 17:01:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # cut -f2 00:10:30.090 17:01:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.090 17:01:22 version -- app/version.sh@20 -- # suffix=-pre 00:10:30.090 17:01:22 version -- app/version.sh@22 -- # version=24.9 00:10:30.090 17:01:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:30.090 17:01:22 version -- app/version.sh@28 -- # version=24.9rc0 00:10:30.090 17:01:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:30.090 17:01:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:30.090 17:01:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:30.090 17:01:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:30.090 00:10:30.090 real 0m0.151s 00:10:30.090 user 0m0.078s 00:10:30.090 sys 0m0.104s 00:10:30.090 ************************************ 00:10:30.090 END TEST version 00:10:30.090 ************************************ 00:10:30.090 17:01:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.090 17:01:22 version -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 17:01:22 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:10:30.090 17:01:22 -- spdk/autotest.sh@202 -- # uname -s 00:10:30.090 17:01:22 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:10:30.090 17:01:22 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:10:30.090 17:01:22 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:10:30.090 17:01:22 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:10:30.090 17:01:22 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:30.090 17:01:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
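The version probe a few lines up assembles the 24.9rc0 string entirely from the preprocessor defines in include/spdk/version.h. Each field comes from the same three-stage pipeline; restated for the MAJOR field (the cut -f2 step relies on the define using a tab separator, an assumption about the header layout that the passing comparison implies rather than something this log shows directly):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'

MINOR, PATCH and SUFFIX are extracted the same way with the define name swapped, and the final cross-check compares the assembled string against python3 -c 'import spdk; print(spdk.__version__)'.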
00:10:30.090 17:01:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.090 17:01:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.090 ************************************ 00:10:30.090 START TEST blockdev_nvme 00:10:30.090 ************************************ 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:30.090 * Looking for test storage... 00:10:30.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:30.090 17:01:22 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65412 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:30.090 17:01:22 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65412 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 65412 ']' 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
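The start_spdk_tgt/waitforlisten pair above launches the target and blocks until its RPC socket answers; a rough equivalent of that pattern (the polling loop is an illustration, not the helper's actual implementation, and rpc_get_methods is assumed to be callable as soon as the app is up):

    ./build/bin/spdk_tgt &               # started with empty extra args, as in the trace
    spdk_tgt_pid=$!
    # poll the UNIX-domain socket until the JSON-RPC server responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done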
00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.090 17:01:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.349 [2024-07-25 17:01:22.606735] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:30.349 [2024-07-25 17:01:22.606961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65412 ] 00:10:30.349 [2024-07-25 17:01:22.784556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.607 [2024-07-25 17:01:23.048134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.540 17:01:23 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.540 17:01:23 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:31.540 17:01:23 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:31.540 17:01:23 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.540 17:01:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.799 17:01:24 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.799 17:01:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:10:31.799 17:01:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.799 17:01:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:31.799 17:01:24 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.799 17:01:24 
blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.058 17:01:24 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:32.058 17:01:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:32.059 17:01:24 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e41811dc-fb80-4e9a-bcb2-fb116e0a7a50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e41811dc-fb80-4e9a-bcb2-fb116e0a7a50",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1ca86afe-e73a-4c09-b98b-1a80b423be6b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1ca86afe-e73a-4c09-b98b-1a80b423be6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": 
false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b8d633d2-bfee-46a5-a72f-9f7f984ef199"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8d633d2-bfee-46a5-a72f-9f7f984ef199",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "dd3507b1-62e9-4bca-b78c-9d954e18c7b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dd3507b1-62e9-4bca-b78c-9d954e18c7b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' 
' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "267505f2-f721-458c-a62c-f1959cad3a8c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "267505f2-f721-458c-a62c-f1959cad3a8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "91add2f3-2b67-44ec-b744-70f49ea74c40"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "91add2f3-2b67-44ec-b744-70f49ea74c40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:32.059 17:01:24 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:32.059 17:01:24 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:32.059 17:01:24 blockdev_nvme -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:32.059 17:01:24 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 65412 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 65412 ']' 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 65412 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65412 00:10:32.059 killing process with pid 65412 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65412' 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 65412 00:10:32.059 17:01:24 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 65412 00:10:34.598 17:01:26 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:34.598 17:01:26 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:34.598 17:01:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:34.598 17:01:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.598 17:01:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.598 ************************************ 00:10:34.598 START TEST bdev_hello_world 00:10:34.598 ************************************ 00:10:34.598 17:01:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:34.598 [2024-07-25 17:01:26.848673] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:34.598 [2024-07-25 17:01:26.848863] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65507 ] 00:10:34.598 [2024-07-25 17:01:27.022100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.856 [2024-07-25 17:01:27.273287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.791 [2024-07-25 17:01:27.932858] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:35.791 [2024-07-25 17:01:27.932937] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:35.791 [2024-07-25 17:01:27.932970] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:35.791 [2024-07-25 17:01:27.936077] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:35.791 [2024-07-25 17:01:27.936526] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:35.791 [2024-07-25 17:01:27.936558] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:35.791 [2024-07-25 17:01:27.936850] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
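That read-back line is the tail of the stock hello_bdev example, which is all the bdev_hello_world test drives; the invocation as exercised above, with repo-relative paths:

    # Opens Nvme0n1 through the bdev layer using the generated bdev.json,
    # writes a string, reads it back, and prints it before stopping the app.
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1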
00:10:35.791 00:10:35.791 [2024-07-25 17:01:27.936889] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:36.724 00:10:36.724 real 0m2.377s 00:10:36.724 user 0m1.982s 00:10:36.724 sys 0m0.283s 00:10:36.724 17:01:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.724 ************************************ 00:10:36.724 END TEST bdev_hello_world 00:10:36.724 17:01:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:36.724 ************************************ 00:10:36.724 17:01:29 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:36.724 17:01:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.724 17:01:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.724 17:01:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.724 ************************************ 00:10:36.724 START TEST bdev_bounds 00:10:36.724 ************************************ 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:10:36.724 Process bdevio pid: 65549 00:10:36.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65549 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65549' 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65549 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65549 ']' 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.724 17:01:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:36.982 [2024-07-25 17:01:29.279517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
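bdev_bounds drives the bdevio application just started above together with its companion RPC script; minus the run_test plumbing, the sequence is roughly as follows (the real script waits for the RPC socket before invoking the tests, as in the earlier sketch):

    # I/O test app loaded with the same generated config; -w waits for RPC
    # configuration, -s 0 is the PRE_RESERVED_MEM value set for nvme above
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # run the per-bdev suites (write/read, reset, writev/comparev, passthru, ...)
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"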
00:10:36.983 [2024-07-25 17:01:29.279704] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65549 ] 00:10:37.241 [2024-07-25 17:01:29.460359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.499 [2024-07-25 17:01:29.743242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.499 [2024-07-25 17:01:29.743398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.499 [2024-07-25 17:01:29.743410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.064 17:01:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.064 17:01:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:10:38.064 17:01:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:38.322 I/O targets: 00:10:38.322 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:38.322 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:38.322 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:38.322 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:38.322 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:38.322 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:38.322 00:10:38.322 00:10:38.322 CUnit - A unit testing framework for C - Version 2.1-3 00:10:38.322 http://cunit.sourceforge.net/ 00:10:38.322 00:10:38.322 00:10:38.322 Suite: bdevio tests on: Nvme3n1 00:10:38.322 Test: blockdev write read block ...passed 00:10:38.322 Test: blockdev write zeroes read block ...passed 00:10:38.322 Test: blockdev write zeroes read no split ...passed 00:10:38.322 Test: blockdev write zeroes read split ...passed 00:10:38.322 Test: blockdev write zeroes read split partial ...passed 00:10:38.322 Test: blockdev reset ...[2024-07-25 17:01:30.646495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:38.322 [2024-07-25 17:01:30.650424] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.322 passed 00:10:38.322 Test: blockdev write read 8 blocks ...passed 00:10:38.322 Test: blockdev write read size > 128k ...passed 00:10:38.322 Test: blockdev write read invalid size ...passed 00:10:38.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.322 Test: blockdev write read max offset ...passed 00:10:38.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.322 Test: blockdev writev readv 8 blocks ...passed 00:10:38.322 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.322 Test: blockdev writev readv block ...passed 00:10:38.322 Test: blockdev writev readv size > 128k ...passed 00:10:38.322 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.322 Test: blockdev comparev and writev ...[2024-07-25 17:01:30.660484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27120a000 len:0x1000 00:10:38.322 [2024-07-25 17:01:30.660551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:38.322 passed 00:10:38.322 Test: blockdev nvme passthru rw ...passed 00:10:38.322 Test: blockdev nvme passthru vendor specific ...passed 00:10:38.322 Test: blockdev nvme admin passthru ...[2024-07-25 17:01:30.661470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:38.322 [2024-07-25 17:01:30.661522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:38.322 passed 00:10:38.322 Test: blockdev copy ...passed 00:10:38.322 Suite: bdevio tests on: Nvme2n3 00:10:38.322 Test: blockdev write read block ...passed 00:10:38.322 Test: blockdev write zeroes read block ...passed 00:10:38.322 Test: blockdev write zeroes read no split ...passed 00:10:38.322 Test: blockdev write zeroes read split ...passed 00:10:38.322 Test: blockdev write zeroes read split partial ...passed 00:10:38.323 Test: blockdev reset ...[2024-07-25 17:01:30.727239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:38.323 [2024-07-25 17:01:30.731738] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.323 passed 00:10:38.323 Test: blockdev write read 8 blocks ...passed 00:10:38.323 Test: blockdev write read size > 128k ...passed 00:10:38.323 Test: blockdev write read invalid size ...passed 00:10:38.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.323 Test: blockdev write read max offset ...passed 00:10:38.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.323 Test: blockdev writev readv 8 blocks ...passed 00:10:38.323 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.323 Test: blockdev writev readv block ...passed 00:10:38.323 Test: blockdev writev readv size > 128k ...passed 00:10:38.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.323 Test: blockdev comparev and writev ...[2024-07-25 17:01:30.739854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x253004000 len:0x1000 00:10:38.323 [2024-07-25 17:01:30.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:38.323 passed 00:10:38.323 Test: blockdev nvme passthru rw ...passed 00:10:38.323 Test: blockdev nvme passthru vendor specific ...passed 00:10:38.323 Test: blockdev nvme admin passthru ...[2024-07-25 17:01:30.740884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:38.323 [2024-07-25 17:01:30.740935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:38.323 passed 00:10:38.323 Test: blockdev copy ...passed 00:10:38.323 Suite: bdevio tests on: Nvme2n2 00:10:38.323 Test: blockdev write read block ...passed 00:10:38.323 Test: blockdev write zeroes read block ...passed 00:10:38.323 Test: blockdev write zeroes read no split ...passed 00:10:38.323 Test: blockdev write zeroes read split ...passed 00:10:38.581 Test: blockdev write zeroes read split partial ...passed 00:10:38.581 Test: blockdev reset ...[2024-07-25 17:01:30.808950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:38.581 [2024-07-25 17:01:30.813453] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.581 passed 00:10:38.581 Test: blockdev write read 8 blocks ...passed 00:10:38.581 Test: blockdev write read size > 128k ...passed 00:10:38.581 Test: blockdev write read invalid size ...passed 00:10:38.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.581 Test: blockdev write read max offset ...passed 00:10:38.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.581 Test: blockdev writev readv 8 blocks ...passed 00:10:38.581 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.581 Test: blockdev writev readv block ...passed 00:10:38.581 Test: blockdev writev readv size > 128k ...passed 00:10:38.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.581 Test: blockdev comparev and writev ...[2024-07-25 17:01:30.821662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28323a000 len:0x1000 00:10:38.581 [2024-07-25 17:01:30.821727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:38.581 passed 00:10:38.581 Test: blockdev nvme passthru rw ...passed 00:10:38.581 Test: blockdev nvme passthru vendor specific ...passed 00:10:38.581 Test: blockdev nvme admin passthru ...[2024-07-25 17:01:30.822499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:38.581 [2024-07-25 17:01:30.822544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:38.581 passed 00:10:38.581 Test: blockdev copy ...passed 00:10:38.581 Suite: bdevio tests on: Nvme2n1 00:10:38.581 Test: blockdev write read block ...passed 00:10:38.581 Test: blockdev write zeroes read block ...passed 00:10:38.581 Test: blockdev write zeroes read no split ...passed 00:10:38.581 Test: blockdev write zeroes read split ...passed 00:10:38.581 Test: blockdev write zeroes read split partial ...passed 00:10:38.581 Test: blockdev reset ...[2024-07-25 17:01:30.909298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:38.581 [2024-07-25 17:01:30.913608] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.581 passed 00:10:38.581 Test: blockdev write read 8 blocks ...passed 00:10:38.581 Test: blockdev write read size > 128k ...passed 00:10:38.581 Test: blockdev write read invalid size ...passed 00:10:38.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.581 Test: blockdev write read max offset ...passed 00:10:38.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.581 Test: blockdev writev readv 8 blocks ...passed 00:10:38.581 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.581 Test: blockdev writev readv block ...passed 00:10:38.581 Test: blockdev writev readv size > 128k ...passed 00:10:38.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.581 Test: blockdev comparev and writev ...[2024-07-25 17:01:30.921184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283234000 len:0x1000 00:10:38.581 [2024-07-25 17:01:30.921247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:38.581 passed 00:10:38.581 Test: blockdev nvme passthru rw ...passed 00:10:38.581 Test: blockdev nvme passthru vendor specific ...[2024-07-25 17:01:30.922016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:38.582 [2024-07-25 17:01:30.922057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:38.582 passed 00:10:38.582 Test: blockdev nvme admin passthru ...passed 00:10:38.582 Test: blockdev copy ...passed 00:10:38.582 Suite: bdevio tests on: Nvme1n1 00:10:38.582 Test: blockdev write read block ...passed 00:10:38.582 Test: blockdev write zeroes read block ...passed 00:10:38.582 Test: blockdev write zeroes read no split ...passed 00:10:38.582 Test: blockdev write zeroes read split ...passed 00:10:38.582 Test: blockdev write zeroes read split partial ...passed 00:10:38.582 Test: blockdev reset ...[2024-07-25 17:01:30.989210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:38.582 [2024-07-25 17:01:30.993047] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.582 passed 00:10:38.582 Test: blockdev write read 8 blocks ...passed 00:10:38.582 Test: blockdev write read size > 128k ...passed 00:10:38.582 Test: blockdev write read invalid size ...passed 00:10:38.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.582 Test: blockdev write read max offset ...passed 00:10:38.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.582 Test: blockdev writev readv 8 blocks ...passed 00:10:38.582 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.582 Test: blockdev writev readv block ...passed 00:10:38.582 Test: blockdev writev readv size > 128k ...passed 00:10:38.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.582 Test: blockdev comparev and writev ...[2024-07-25 17:01:31.001070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283230000 len:0x1000 00:10:38.582 [2024-07-25 17:01:31.001137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:38.582 passed 00:10:38.582 Test: blockdev nvme passthru rw ...passed 00:10:38.582 Test: blockdev nvme passthru vendor specific ...[2024-07-25 17:01:31.001818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:38.582 [2024-07-25 17:01:31.001856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:38.582 passed 00:10:38.582 Test: blockdev nvme admin passthru ...passed 00:10:38.582 Test: blockdev copy ...passed 00:10:38.582 Suite: bdevio tests on: Nvme0n1 00:10:38.582 Test: blockdev write read block ...passed 00:10:38.582 Test: blockdev write zeroes read block ...passed 00:10:38.582 Test: blockdev write zeroes read no split ...passed 00:10:38.582 Test: blockdev write zeroes read split ...passed 00:10:38.840 Test: blockdev write zeroes read split partial ...passed 00:10:38.840 Test: blockdev reset ...[2024-07-25 17:01:31.070461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:38.840 [2024-07-25 17:01:31.074343] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:38.840 passed 00:10:38.840 Test: blockdev write read 8 blocks ...passed 00:10:38.840 Test: blockdev write read size > 128k ...passed 00:10:38.840 Test: blockdev write read invalid size ...passed 00:10:38.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:38.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:38.840 Test: blockdev write read max offset ...passed 00:10:38.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:38.840 Test: blockdev writev readv 8 blocks ...passed 00:10:38.840 Test: blockdev writev readv 30 x 1block ...passed 00:10:38.840 Test: blockdev writev readv block ...passed 00:10:38.840 Test: blockdev writev readv size > 128k ...passed 00:10:38.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:38.840 Test: blockdev comparev and writev ...passed 00:10:38.840 Test: blockdev nvme passthru rw ...[2024-07-25 17:01:31.081652] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:38.840 separate metadata which is not supported yet. 00:10:38.840 passed 00:10:38.840 Test: blockdev nvme passthru vendor specific ...passed 00:10:38.840 Test: blockdev nvme admin passthru ...[2024-07-25 17:01:31.082129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:38.840 [2024-07-25 17:01:31.082184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:38.840 passed 00:10:38.840 Test: blockdev copy ...passed 00:10:38.840 00:10:38.840 Run Summary: Type Total Ran Passed Failed Inactive 00:10:38.840 suites 6 6 n/a 0 0 00:10:38.840 tests 138 138 138 0 0 00:10:38.840 asserts 893 893 893 0 n/a 00:10:38.840 00:10:38.840 Elapsed time = 1.378 seconds 00:10:38.840 0 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65549 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65549 ']' 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65549 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65549 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65549' 00:10:38.840 killing process with pid 65549 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65549 00:10:38.840 17:01:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65549 00:10:39.771 17:01:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:39.771 00:10:39.771 real 0m2.998s 00:10:39.771 user 0m7.248s 00:10:39.771 sys 0m0.421s 00:10:39.772 17:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.772 17:01:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:39.772 ************************************ 00:10:39.772 END 
TEST bdev_bounds 00:10:39.772 ************************************ 00:10:39.772 17:01:32 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:39.772 17:01:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:39.772 17:01:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.772 17:01:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:39.772 ************************************ 00:10:39.772 START TEST bdev_nbd 00:10:39.772 ************************************ 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65614 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65614 /var/tmp/spdk-nbd.sock 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65614 ']' 00:10:39.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
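Once a bdev_svc app is listening on /var/tmp/spdk-nbd.sock, nbd_function_test exports each bdev as a kernel block device over NBD through that socket; a minimal sketch of the pattern (here /dev/nbd0 is named explicitly for illustration, whereas the trace lets the target pick the device, and nbd_stop_disk is the matching cleanup call):

    # export the bdev as an NBD block device via the dedicated RPC socket
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # ... the test then dd's a block from the device, as in the waitfornbd checks that follow ...
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0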
00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.772 17:01:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:40.030 [2024-07-25 17:01:32.331281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:40.030 [2024-07-25 17:01:32.331442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.288 [2024-07-25 17:01:32.499395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.288 [2024-07-25 17:01:32.749371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:41.223 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.499 1+0 records in 00:10:41.499 1+0 records out 00:10:41.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693538 s, 5.9 MB/s 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:41.499 17:01:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.763 1+0 records in 00:10:41.763 1+0 records out 00:10:41.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662982 s, 6.2 MB/s 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
size=4096 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.763 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.764 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:41.764 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:41.764 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:41.764 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.022 1+0 records in 00:10:42.022 1+0 records out 00:10:42.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653387 s, 6.3 MB/s 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:42.022 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.280 17:01:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.280 1+0 records in 00:10:42.280 1+0 records out 00:10:42.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636372 s, 6.4 MB/s 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:42.280 17:01:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:42.846 1+0 records in 00:10:42.846 1+0 records out 00:10:42.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00234651 s, 1.7 MB/s 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.846 17:01:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:42.846 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:43.106 1+0 records in 00:10:43.106 1+0 records out 00:10:43.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000852562 s, 4.8 MB/s 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:43.106 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:43.364 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd0", 00:10:43.364 "bdev_name": "Nvme0n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd1", 00:10:43.364 "bdev_name": "Nvme1n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd2", 00:10:43.364 "bdev_name": "Nvme2n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd3", 00:10:43.364 "bdev_name": "Nvme2n2" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd4", 00:10:43.364 "bdev_name": "Nvme2n3" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd5", 00:10:43.364 "bdev_name": "Nvme3n1" 00:10:43.364 } 00:10:43.364 ]' 00:10:43.364 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq 
-r '.[] | .nbd_device')) 00:10:43.364 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:43.364 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd0", 00:10:43.364 "bdev_name": "Nvme0n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd1", 00:10:43.364 "bdev_name": "Nvme1n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd2", 00:10:43.364 "bdev_name": "Nvme2n1" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd3", 00:10:43.364 "bdev_name": "Nvme2n2" 00:10:43.364 }, 00:10:43.364 { 00:10:43.364 "nbd_device": "/dev/nbd4", 00:10:43.364 "bdev_name": "Nvme2n3" 00:10:43.365 }, 00:10:43.365 { 00:10:43.365 "nbd_device": "/dev/nbd5", 00:10:43.365 "bdev_name": "Nvme3n1" 00:10:43.365 } 00:10:43.365 ]' 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.365 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.623 17:01:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.881 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.139 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.397 17:01:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:44.655 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:44.914 17:01:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.914 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:45.172 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.430 
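The count check just traced (nbd_get_disks returning [] and grep -c reporting 0) is how the test confirms that no NBD exports remain attached before moving on. A small sketch of that counting step, reconstructed from the trace; the function wrapper and the || true guard are assumptions rather than verbatim nbd_common.sh code:

# Ask the SPDK app for its attached NBD disks over the RPC socket and count the
# /dev/nbd entries that come back.
nbd_get_count() {
	local rpc_server=$1 nbd_disks_json nbd_disks_name
	nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
	nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
	# grep -c still prints 0 when nothing matches but exits non-zero, hence the guard
	echo "$nbd_disks_name" | grep -c /dev/nbd || true
}
# e.g. [ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -eq 0 ] once every disk is stopped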
17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:45.430 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:45.430 /dev/nbd0 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:45.724 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.725 1+0 records in 00:10:45.725 1+0 records out 00:10:45.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672316 s, 6.1 MB/s 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:45.725 17:01:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:45.994 /dev/nbd1 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:45.994 17:01:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:45.994 1+0 records in 00:10:45.994 1+0 records out 00:10:45.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804223 s, 5.1 MB/s 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.994 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:45.995 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:45.995 /dev/nbd10 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.254 1+0 records in 00:10:46.254 1+0 records out 00:10:46.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623568 s, 6.6 MB/s 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:46.254 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:46.513 /dev/nbd11 
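The waitfornbd/dd sequence repeated throughout the trace above is the readiness check used after every nbd_start_disk call: poll /proc/partitions until the kernel exposes the new node, then do one direct-I/O read to prove the device actually serves data. A minimal sketch of that pattern, reconstructed from the trace (the sleep interval, scratch-file path and failure handling are assumptions; only the success path is visible above):

# Wait for /dev/$nbd_name to appear, then read a single 4 KiB block with O_DIRECT.
# Mirrors the traced helper: grep /proc/partitions in a bounded loop, dd one block,
# record its size, clean up, and fail if nothing was read.
waitfornbd() {
	local nbd_name=$1 i size
	for ((i = 1; i <= 20; i++)); do
		grep -q -w "$nbd_name" /proc/partitions && break
		sleep 0.1   # interval is an assumption; the trace only shows the immediate-success case
	done
	dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
	size=$(stat -c %s /tmp/nbdtest)
	rm -f /tmp/nbdtest
	[ "$size" != 0 ]
}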
00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.513 1+0 records in 00:10:46.513 1+0 records out 00:10:46.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752514 s, 5.4 MB/s 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:46.513 17:01:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:46.772 /dev/nbd12 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:46.772 1+0 records in 00:10:46.772 1+0 records out 00:10:46.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795402 s, 5.1 MB/s 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:46.772 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:47.031 /dev/nbd13 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:47.031 1+0 records in 00:10:47.031 1+0 records out 00:10:47.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846699 s, 4.8 MB/s 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.031 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd0", 00:10:47.290 "bdev_name": "Nvme0n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd1", 
00:10:47.290 "bdev_name": "Nvme1n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd10", 00:10:47.290 "bdev_name": "Nvme2n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd11", 00:10:47.290 "bdev_name": "Nvme2n2" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd12", 00:10:47.290 "bdev_name": "Nvme2n3" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd13", 00:10:47.290 "bdev_name": "Nvme3n1" 00:10:47.290 } 00:10:47.290 ]' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd0", 00:10:47.290 "bdev_name": "Nvme0n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd1", 00:10:47.290 "bdev_name": "Nvme1n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd10", 00:10:47.290 "bdev_name": "Nvme2n1" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd11", 00:10:47.290 "bdev_name": "Nvme2n2" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd12", 00:10:47.290 "bdev_name": "Nvme2n3" 00:10:47.290 }, 00:10:47.290 { 00:10:47.290 "nbd_device": "/dev/nbd13", 00:10:47.290 "bdev_name": "Nvme3n1" 00:10:47.290 } 00:10:47.290 ]' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:47.290 /dev/nbd1 00:10:47.290 /dev/nbd10 00:10:47.290 /dev/nbd11 00:10:47.290 /dev/nbd12 00:10:47.290 /dev/nbd13' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:47.290 /dev/nbd1 00:10:47.290 /dev/nbd10 00:10:47.290 /dev/nbd11 00:10:47.290 /dev/nbd12 00:10:47.290 /dev/nbd13' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:47.290 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:47.549 256+0 records in 00:10:47.549 256+0 records out 00:10:47.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744574 s, 141 MB/s 00:10:47.549 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.549 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:47.549 
256+0 records in 00:10:47.549 256+0 records out 00:10:47.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158155 s, 6.6 MB/s 00:10:47.549 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.549 17:01:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:47.808 256+0 records in 00:10:47.808 256+0 records out 00:10:47.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162308 s, 6.5 MB/s 00:10:47.808 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.808 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:47.808 256+0 records in 00:10:47.808 256+0 records out 00:10:47.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142581 s, 7.4 MB/s 00:10:47.808 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.808 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:48.067 256+0 records in 00:10:48.067 256+0 records out 00:10:48.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150235 s, 7.0 MB/s 00:10:48.067 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:48.067 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:48.326 256+0 records in 00:10:48.326 256+0 records out 00:10:48.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170127 s, 6.2 MB/s 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:48.326 256+0 records in 00:10:48.326 256+0 records out 00:10:48.326 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168094 s, 6.2 MB/s 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:48.326 
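The dd runs above are the write half of the data-verify pass: one 1 MiB random file is pushed through every exported device with O_DIRECT, and the cmp calls that follow read each device back against the same file. A condensed sketch of that loop, using the paths and device list shown in the trace (the loop structure itself is a reconstruction):

# Write identical random data through every NBD device, then verify it by reading
# the first 1M of each device back and comparing against the source file.
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
	dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
done
for dev in "${nbd_list[@]}"; do
	cmp -b -n 1M "$tmp_file" "$dev"                              # verify phase: any mismatch fails the test
done
rm "$tmp_file"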
17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.326 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.585 17:01:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.843 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 
-- # grep -q -w nbd1 /proc/partitions 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.102 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.360 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.361 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.361 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.619 17:01:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:49.877 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.137 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:50.499 17:01:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:50.759 malloc_lvol_verify 00:10:50.759 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:51.020 18a5198d-9e14-4ff2-83c6-9dba84916cf2 00:10:51.020 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:51.280 23a676cd-6189-4ff4-8d4c-de2f38f8f4cc 00:10:51.280 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:51.538 /dev/nbd0 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:51.538 Discarding device blocks: 0/4096mke2fs 1.46.5 (30-Dec-2021) 00:10:51.538 done 00:10:51.538 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:51.538 00:10:51.538 Allocating group tables: 0/1 done 00:10:51.538 Writing inode tables: 0/1 done 00:10:51.538 Creating journal (1024 blocks): done 00:10:51.538 Writing superblocks and filesystem accounting information: 0/1 done 00:10:51.538 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.538 17:01:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65614 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65614 ']' 00:10:51.796 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65614 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65614 00:10:52.055 killing process with pid 65614 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65614' 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65614 00:10:52.055 17:01:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 65614 00:10:53.429 ************************************ 00:10:53.429 END TEST bdev_nbd 00:10:53.429 ************************************ 00:10:53.429 17:01:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:53.429 00:10:53.429 real 0m13.351s 00:10:53.429 user 0m18.771s 00:10:53.429 sys 0m4.256s 00:10:53.429 17:01:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.429 17:01:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:53.429 17:01:45 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:53.429 17:01:45 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:10:53.429 skipping fio tests on NVMe due to multi-ns failures. 00:10:53.429 17:01:45 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:53.429 17:01:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:53.429 17:01:45 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:53.429 17:01:45 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:53.429 17:01:45 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.429 17:01:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.429 ************************************ 00:10:53.429 START TEST bdev_verify 00:10:53.429 ************************************ 00:10:53.429 17:01:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:53.429 [2024-07-25 17:01:45.744846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:53.429 [2024-07-25 17:01:45.745049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66032 ] 00:10:53.688 [2024-07-25 17:01:45.921314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.945 [2024-07-25 17:01:46.171588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.945 [2024-07-25 17:01:46.171611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.512 Running I/O for 5 seconds... 
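The final step of the bdev_nbd test that just finished (nbd_with_lvol_verify) builds a logical volume on a malloc bdev, exports it over NBD and formats it with mkfs.ext4 as an end-to-end check. A condensed sketch of that sequence using the same RPCs seen in the trace; the shell scaffolding and teardown ordering here are illustrative:

# malloc bdev -> lvstore -> lvol -> NBD export -> mkfs.ext4, then tear down.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

$rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
$rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol inside lvs
$rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                               # its exit status is what the test records
$rpc -s "$sock" nbd_stop_disk /dev/nbd0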
00:10:59.799
00:10:59.799 Latency(us)
00:10:59.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:59.799 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0xbd0bd
00:10:59.799 Nvme0n1 : 5.07 1349.94 5.27 0.00 0.00 94393.10 9830.40 126782.37
00:10:59.799 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:59.799 Nvme0n1 : 5.06 1442.05 5.63 0.00 0.00 88567.03 17873.45 79119.83
00:10:59.799 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0xa0000
00:10:59.799 Nvme1n1 : 5.07 1349.38 5.27 0.00 0.00 94270.89 10009.13 118203.11
00:10:59.799 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0xa0000 length 0xa0000
00:10:59.799 Nvme1n1 : 5.06 1441.46 5.63 0.00 0.00 88439.86 17754.30 76260.07
00:10:59.799 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0x80000
00:10:59.799 Nvme2n1 : 5.08 1348.86 5.27 0.00 0.00 94051.07 9889.98 121062.87
00:10:59.799 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x80000 length 0x80000
00:10:59.799 Nvme2n1 : 5.06 1440.89 5.63 0.00 0.00 88284.68 16801.05 72447.07
00:10:59.799 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0x80000
00:10:59.799 Nvme2n2 : 5.09 1358.06 5.30 0.00 0.00 93492.38 9115.46 124875.87
00:10:59.799 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x80000 length 0x80000
00:10:59.799 Nvme2n2 : 5.07 1440.31 5.63 0.00 0.00 88163.06 16801.05 72447.07
00:10:59.799 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0x80000
00:10:59.799 Nvme2n3 : 5.09 1357.55 5.30 0.00 0.00 93357.97 9413.35 128688.87
00:10:59.799 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x80000 length 0x80000
00:10:59.799 Nvme2n3 : 5.07 1439.72 5.62 0.00 0.00 88043.82 16920.20 76260.07
00:10:59.799 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x0 length 0x20000
00:10:59.799 Nvme3n1 : 5.09 1357.05 5.30 0.00 0.00 93228.37 9711.24 129642.12
00:10:59.799 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.799 Verification LBA range: start 0x20000 length 0x20000
00:10:59.799 Nvme3n1 : 5.07 1439.16 5.62 0.00 0.00 87917.94 15966.95 78643.20
00:10:59.799 ===================================================================================================================
00:10:59.799 Total : 16764.43 65.49 0.00 0.00 90934.90 9115.46 129642.12
00:11:01.174
00:11:01.174 real 0m7.862s
00:11:01.174 user 0m14.225s
00:11:01.174 sys 0m0.300s
00:11:01.174 17:01:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:01.174 17:01:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:01.174 ************************************
00:11:01.174 END TEST bdev_verify
00:11:01.174 ************************************
00:11:01.174 17:01:53 blockdev_nvme -- bdev/blockdev.sh@777
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:01.174 17:01:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:01.174 17:01:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.174 17:01:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:01.174 ************************************ 00:11:01.174 START TEST bdev_verify_big_io 00:11:01.174 ************************************ 00:11:01.174 17:01:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:01.433 [2024-07-25 17:01:53.666616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:01.433 [2024-07-25 17:01:53.666836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66130 ] 00:11:01.433 [2024-07-25 17:01:53.846148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.691 [2024-07-25 17:01:54.086389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.691 [2024-07-25 17:01:54.086413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.626 Running I/O for 5 seconds... 00:11:09.214 00:11:09.214 Latency(us) 00:11:09.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.214 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0xbd0b 00:11:09.214 Nvme0n1 : 5.68 111.01 6.94 0.00 0.00 1127011.10 19660.80 1075267.03 00:11:09.214 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:09.214 Nvme0n1 : 5.55 172.84 10.80 0.00 0.00 716802.67 23592.96 724470.69 00:11:09.214 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0xa000 00:11:09.214 Nvme1n1 : 5.69 112.53 7.03 0.00 0.00 1081423.87 34317.03 1075267.03 00:11:09.214 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0xa000 length 0xa000 00:11:09.214 Nvme1n1 : 5.62 179.33 11.21 0.00 0.00 685220.73 26095.24 636771.61 00:11:09.214 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0x8000 00:11:09.214 Nvme2n1 : 5.69 112.48 7.03 0.00 0.00 1050757.12 34793.66 1082893.03 00:11:09.214 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x8000 length 0x8000 00:11:09.214 Nvme2n1 : 5.62 179.26 11.20 0.00 0.00 670410.47 27167.65 697779.67 00:11:09.214 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0x8000 00:11:09.214 Nvme2n2 : 5.69 101.89 6.37 0.00 0.00 1121882.02 35508.60 2196290.09 00:11:09.214 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x8000 length 0x8000 00:11:09.214 Nvme2n2 : 5.63 179.20 
11.20 0.00 0.00 655673.85 28716.68 674901.64 00:11:09.214 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0x8000 00:11:09.214 Nvme2n3 : 5.73 118.53 7.41 0.00 0.00 942778.40 17396.83 1715851.64 00:11:09.214 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x8000 length 0x8000 00:11:09.214 Nvme2n3 : 5.63 182.67 11.42 0.00 0.00 630635.92 35746.91 758787.72 00:11:09.214 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x0 length 0x2000 00:11:09.214 Nvme3n1 : 5.79 136.36 8.52 0.00 0.00 803183.92 1027.72 2303054.20 00:11:09.214 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:09.214 Verification LBA range: start 0x2000 length 0x2000 00:11:09.214 Nvme3n1 : 5.64 192.38 12.02 0.00 0.00 587159.98 2844.86 762600.73 00:11:09.214 =================================================================================================================== 00:11:09.214 Total : 1778.47 111.15 0.00 0.00 795645.72 1027.72 2303054.20 00:11:10.150 00:11:10.150 real 0m8.962s 00:11:10.150 user 0m16.362s 00:11:10.150 sys 0m0.369s 00:11:10.150 17:02:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.150 ************************************ 00:11:10.150 END TEST bdev_verify_big_io 00:11:10.150 ************************************ 00:11:10.150 17:02:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:10.150 17:02:02 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.150 17:02:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:10.150 17:02:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.150 17:02:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:10.150 ************************************ 00:11:10.150 START TEST bdev_write_zeroes 00:11:10.150 ************************************ 00:11:10.150 17:02:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:10.431 [2024-07-25 17:02:02.668305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:10.431 [2024-07-25 17:02:02.668478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66245 ] 00:11:10.431 [2024-07-25 17:02:02.843555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.690 [2024-07-25 17:02:03.088605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.622 Running I/O for 1 seconds... 
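The bdevperf runs in this stretch (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all drive the same bdevperf binary against the same bdev.json, varying only the flags visible in the run_test lines: -o I/O size, -w workload, -t run time, plus -m 0x3 -C for the big-I/O verify case above. Both verify tables report two jobs per bdev (core masks 0x1 and 0x2), while the write_zeroes run below stays on a single core. A minimal sketch of reproducing the write_zeroes run whose results follow, using only flags taken from this log (paths are the CI workspace paths):

  # same invocation the run_test wrapper issues above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1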
00:11:12.556 00:11:12.556 Latency(us) 00:11:12.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.556 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme0n1 : 1.01 8640.02 33.75 0.00 0.00 14755.56 8102.63 29074.15 00:11:12.556 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme1n1 : 1.02 8626.11 33.70 0.00 0.00 14753.71 11677.32 22163.08 00:11:12.556 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme2n1 : 1.02 8659.10 33.82 0.00 0.00 14687.58 8996.31 19779.96 00:11:12.556 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme2n2 : 1.02 8645.81 33.77 0.00 0.00 14643.22 9234.62 17754.30 00:11:12.556 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme2n3 : 1.02 8632.71 33.72 0.00 0.00 14632.98 8757.99 17873.45 00:11:12.556 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:12.556 Nvme3n1 : 1.02 8619.65 33.67 0.00 0.00 14623.45 8579.26 18111.77 00:11:12.556 =================================================================================================================== 00:11:12.556 Total : 51823.40 202.44 0.00 0.00 14682.58 8102.63 29074.15 00:11:13.930 00:11:13.930 real 0m3.519s 00:11:13.930 user 0m3.094s 00:11:13.930 sys 0m0.300s 00:11:13.930 17:02:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.930 17:02:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:13.930 ************************************ 00:11:13.930 END TEST bdev_write_zeroes 00:11:13.930 ************************************ 00:11:13.930 17:02:06 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.930 17:02:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:13.930 17:02:06 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.930 17:02:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:13.930 ************************************ 00:11:13.930 START TEST bdev_json_nonenclosed 00:11:13.930 ************************************ 00:11:13.930 17:02:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:13.930 [2024-07-25 17:02:06.227526] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:13.930 [2024-07-25 17:02:06.227695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66309 ] 00:11:14.189 [2024-07-25 17:02:06.402958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.189 [2024-07-25 17:02:06.643668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.189 [2024-07-25 17:02:06.643787] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:11:14.189 [2024-07-25 17:02:06.643820] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:14.189 [2024-07-25 17:02:06.643838] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:14.754 00:11:14.754 real 0m0.928s 00:11:14.754 user 0m0.676s 00:11:14.754 sys 0m0.146s 00:11:14.754 17:02:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.754 17:02:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:14.754 ************************************ 00:11:14.754 END TEST bdev_json_nonenclosed 00:11:14.754 ************************************ 00:11:14.754 17:02:07 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:14.754 17:02:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:14.754 17:02:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.754 17:02:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.754 ************************************ 00:11:14.754 START TEST bdev_json_nonarray 00:11:14.754 ************************************ 00:11:14.754 17:02:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:14.754 [2024-07-25 17:02:07.209133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:14.754 [2024-07-25 17:02:07.209303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66340 ] 00:11:15.012 [2024-07-25 17:02:07.375033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.270 [2024-07-25 17:02:07.630988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.270 [2024-07-25 17:02:07.631131] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
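The two *ERROR* lines above are the expected outcomes of the JSON negative tests: bdev_json_nonenclosed points bdevperf at a config whose top level is not enclosed in a JSON object, and bdev_json_nonarray at one whose "subsystems" member is not an array; in both cases bdevperf is expected to stop with an error, and the surrounding run_test wrappers treat that as a pass. Purely illustrative shapes of such configs (not the actual nonenclosed.json / nonarray.json contents, which this log does not print):

  "subsystems": [ { "subsystem": "bdev", "config": [] } ]    <- fragment with no enclosing { }, triggers "not enclosed in {}"
  { "subsystems": { "subsystem": "bdev" } }                  <- "subsystems" is an object, triggers "should be an array"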
00:11:15.270 [2024-07-25 17:02:07.631166] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:15.270 [2024-07-25 17:02:07.631184] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:15.835 00:11:15.835 real 0m0.934s 00:11:15.835 user 0m0.676s 00:11:15.835 sys 0m0.152s 00:11:15.835 17:02:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.835 17:02:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:15.835 ************************************ 00:11:15.835 END TEST bdev_json_nonarray 00:11:15.835 ************************************ 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:15.835 17:02:08 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:15.835 00:11:15.835 real 0m45.731s 00:11:15.835 user 1m7.552s 00:11:15.835 sys 0m7.194s 00:11:15.835 17:02:08 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.835 ************************************ 00:11:15.835 17:02:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:15.835 END TEST blockdev_nvme 00:11:15.835 ************************************ 00:11:15.835 17:02:08 -- spdk/autotest.sh@217 -- # uname -s 00:11:15.835 17:02:08 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:11:15.835 17:02:08 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:15.835 17:02:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.835 17:02:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.835 17:02:08 -- common/autotest_common.sh@10 -- # set +x 00:11:15.835 ************************************ 00:11:15.835 START TEST blockdev_nvme_gpt 00:11:15.835 ************************************ 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:15.835 * Looking for test storage... 
00:11:15.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66416 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66416 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 66416 ']' 00:11:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.835 17:02:08 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
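The gpt test begins by bringing up a long-lived spdk_tgt (spdk_tgt_pid=66416 above) so the GPT setup and bdev configuration that follow can be driven over its JSON-RPC socket; waitforlisten blocks until that socket answers. A hedged sketch of the equivalent manual sequence; the polling loop is an illustration, not necessarily how waitforlisten is implemented internally:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1   # keep polling until the target's RPC server is listening
  done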
00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.835 17:02:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.092 [2024-07-25 17:02:08.375666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:16.092 [2024-07-25 17:02:08.375859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66416 ] 00:11:16.092 [2024-07-25 17:02:08.550592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.664 [2024-07-25 17:02:08.820179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.230 17:02:09 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.230 17:02:09 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:11:17.230 17:02:09 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:17.230 17:02:09 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:11:17.230 17:02:09 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:17.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.797 Waiting for block devices as requested 00:11:17.797 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.056 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.056 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:18.056 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.331 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:23.331 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.331 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:23.332 BYT; 00:11:23.332 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:23.332 BYT; 00:11:23.332 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:23.332 17:02:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:23.332 17:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:24.266 The operation has completed successfully. 
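setup_gpt_conf has now picked /dev/nvme0n1 (the first namespace whose parted print reported an unrecognised disk label), written a GPT label with two half-disk partitions, and retyped partition 1 with the SPDK partition type GUID pulled out of module/bdev/gpt/gpt.h; the sgdisk call immediately below does the same for partition 2 with the old-style GUID. Condensed from the commands above and below (GUIDs exactly as greped from gpt.h by get_spdk_gpt / get_spdk_gpt_old):

  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

These two partitions are what the gpt vbdev module later reports as Nvme1n1p1 and Nvme1n1p2 in the bdev_get_bdevs dump further down.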
00:11:24.266 17:02:16 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:25.640 The operation has completed successfully. 00:11:25.640 17:02:17 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.464 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.464 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.464 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.464 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.464 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:26.464 17:02:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.464 17:02:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.464 [] 00:11:26.464 17:02:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.464 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:26.464 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:26.464 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:26.464 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:26.722 17:02:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:26.722 17:02:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.722 17:02:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.980 
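The bdev configuration for the target is not hand-written: setup_nvme_conf captures the output of scripts/gen_nvme.sh and loads it with the load_subsystem_config RPC, exactly as quoted above. The generated JSON is one bdev_nvme_attach_controller entry per detected PCIe controller; the first entry, reformatted from the quoted blob for readability:

  {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
    ]
  }

The remaining three entries differ only in name (Nvme1..Nvme3) and traddr (0000:00:11.0, 0000:00:12.0, 0000:00:13.0).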
17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:26.980 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:26.980 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a4e81c5d-9a0b-4b7e-a804-f456faee208f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a4e81c5d-9a0b-4b7e-a804-f456faee208f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c5f48e75-35ff-43cd-93a3-3acc7f1f8a21"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5f48e75-35ff-43cd-93a3-3acc7f1f8a21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "774937e6-0aa0-46f4-a185-ccc6c2c4c7fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "774937e6-0aa0-46f4-a185-ccc6c2c4c7fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ca60e490-994a-4a72-ba0e-7fcb848db2b6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca60e490-994a-4a72-ba0e-7fcb848db2b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6c48441c-525c-46e5-93b7-97abc9392d55"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6c48441c-525c-46e5-93b7-97abc9392d55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:27.238 17:02:19 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 66416 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 66416 ']' 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 66416 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66416 00:11:27.239 killing process with pid 66416 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66416' 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 66416 00:11:27.239 17:02:19 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 66416 00:11:29.793 17:02:21 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:29.793 17:02:21 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:29.793 17:02:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:29.793 17:02:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.793 17:02:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:29.793 ************************************ 00:11:29.793 START TEST bdev_hello_world 00:11:29.793 ************************************ 00:11:29.793 17:02:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:29.793 [2024-07-25 17:02:21.884479] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:29.793 [2024-07-25 17:02:21.884663] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67048 ] 00:11:29.793 [2024-07-25 17:02:22.059292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.052 [2024-07-25 17:02:22.340210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.618 [2024-07-25 17:02:22.992914] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:30.618 [2024-07-25 17:02:22.993012] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:30.618 [2024-07-25 17:02:22.993059] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:30.618 [2024-07-25 17:02:22.996490] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:30.618 [2024-07-25 17:02:22.997117] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:30.618 [2024-07-25 17:02:22.997160] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:30.618 [2024-07-25 17:02:22.997397] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:30.618 00:11:30.618 [2024-07-25 17:02:22.997432] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:32.000 00:11:32.000 real 0m2.405s 00:11:32.000 user 0m2.019s 00:11:32.000 sys 0m0.275s 00:11:32.000 ************************************ 00:11:32.000 END TEST bdev_hello_world 00:11:32.000 ************************************ 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:32.000 17:02:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:32.000 17:02:24 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.000 17:02:24 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.000 17:02:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:32.000 ************************************ 00:11:32.000 START TEST bdev_bounds 00:11:32.000 ************************************ 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:32.000 Process bdevio pid: 67095 00:11:32.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
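bdev_bounds, which starts here, swaps bdevperf for the bdevio app: bdevio is launched against the same bdev.json and the individual cases are then kicked off over RPC by tests.py, as the wrapper lines just below show. Condensed from those commands (the -w and -s 0 flags are reproduced as the wrapper passes them):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Each 'Suite: bdevio tests on: <bdev>' block that follows exercises one bdev from that config, including the two GPT partitions created earlier.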
00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67095 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67095' 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67095 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 67095 ']' 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:32.000 17:02:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:32.000 [2024-07-25 17:02:24.345266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:32.000 [2024-07-25 17:02:24.345726] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67095 ] 00:11:32.258 [2024-07-25 17:02:24.521039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.515 [2024-07-25 17:02:24.780853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.515 [2024-07-25 17:02:24.780956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.515 [2024-07-25 17:02:24.780970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.081 17:02:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:33.081 17:02:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:33.081 17:02:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:33.339 I/O targets: 00:11:33.339 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:33.339 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:33.339 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:33.339 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:33.339 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:33.339 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:33.339 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:33.339 00:11:33.339 00:11:33.339 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.339 http://cunit.sourceforge.net/ 00:11:33.339 00:11:33.339 00:11:33.339 Suite: bdevio tests on: Nvme3n1 00:11:33.339 Test: blockdev write read block ...passed 00:11:33.339 Test: blockdev write zeroes read block ...passed 00:11:33.339 Test: blockdev write zeroes read no split ...passed 00:11:33.339 Test: blockdev write zeroes read split ...passed 00:11:33.339 Test: blockdev write zeroes 
read split partial ...passed 00:11:33.339 Test: blockdev reset ...[2024-07-25 17:02:25.663906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:11:33.339 [2024-07-25 17:02:25.668790] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:33.339 passed 00:11:33.339 Test: blockdev write read 8 blocks ...passed 00:11:33.339 Test: blockdev write read size > 128k ...passed 00:11:33.339 Test: blockdev write read invalid size ...passed 00:11:33.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.339 Test: blockdev write read max offset ...passed 00:11:33.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.339 Test: blockdev writev readv 8 blocks ...passed 00:11:33.339 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.339 Test: blockdev writev readv block ...passed 00:11:33.339 Test: blockdev writev readv size > 128k ...passed 00:11:33.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.339 Test: blockdev comparev and writev ...[2024-07-25 17:02:25.681117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b006000 len:0x1000 00:11:33.340 [2024-07-25 17:02:25.681218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.340 passed 00:11:33.340 Test: blockdev nvme passthru rw ...passed 00:11:33.340 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.340 Test: blockdev nvme admin passthru ...[2024-07-25 17:02:25.682409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:33.340 [2024-07-25 17:02:25.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:33.340 passed 00:11:33.340 Test: blockdev copy ...passed 00:11:33.340 Suite: bdevio tests on: Nvme2n3 00:11:33.340 Test: blockdev write read block ...passed 00:11:33.340 Test: blockdev write zeroes read block ...passed 00:11:33.340 Test: blockdev write zeroes read no split ...passed 00:11:33.340 Test: blockdev write zeroes read split ...passed 00:11:33.340 Test: blockdev write zeroes read split partial ...passed 00:11:33.340 Test: blockdev reset ...[2024-07-25 17:02:25.744500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:33.340 [2024-07-25 17:02:25.749821] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:33.340 passed 00:11:33.340 Test: blockdev write read 8 blocks ...passed 00:11:33.340 Test: blockdev write read size > 128k ...passed 00:11:33.340 Test: blockdev write read invalid size ...passed 00:11:33.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.340 Test: blockdev write read max offset ...passed 00:11:33.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.340 Test: blockdev writev readv 8 blocks ...passed 00:11:33.340 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.340 Test: blockdev writev readv block ...passed 00:11:33.340 Test: blockdev writev readv size > 128k ...passed 00:11:33.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.340 Test: blockdev comparev and writev ...[2024-07-25 17:02:25.760435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27d03c000 len:0x1000 00:11:33.340 [2024-07-25 17:02:25.760508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.340 passed 00:11:33.340 Test: blockdev nvme passthru rw ...passed 00:11:33.340 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.340 Test: blockdev nvme admin passthru ...[2024-07-25 17:02:25.761641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:33.340 [2024-07-25 17:02:25.761693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:33.340 passed 00:11:33.340 Test: blockdev copy ...passed 00:11:33.340 Suite: bdevio tests on: Nvme2n2 00:11:33.340 Test: blockdev write read block ...passed 00:11:33.340 Test: blockdev write zeroes read block ...passed 00:11:33.340 Test: blockdev write zeroes read no split ...passed 00:11:33.340 Test: blockdev write zeroes read split ...passed 00:11:33.598 Test: blockdev write zeroes read split partial ...passed 00:11:33.598 Test: blockdev reset ...[2024-07-25 17:02:25.824663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:33.598 [2024-07-25 17:02:25.829258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:33.598 passed 00:11:33.598 Test: blockdev write read 8 blocks ...passed 00:11:33.598 Test: blockdev write read size > 128k ...passed 00:11:33.598 Test: blockdev write read invalid size ...passed 00:11:33.598 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.598 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.598 Test: blockdev write read max offset ...passed 00:11:33.598 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.598 Test: blockdev writev readv 8 blocks ...passed 00:11:33.598 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.598 Test: blockdev writev readv block ...passed 00:11:33.598 Test: blockdev writev readv size > 128k ...passed 00:11:33.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.598 Test: blockdev comparev and writev ...[2024-07-25 17:02:25.838113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27d036000 len:0x1000 00:11:33.598 [2024-07-25 17:02:25.838192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.598 passed 00:11:33.598 Test: blockdev nvme passthru rw ...passed 00:11:33.598 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.598 Test: blockdev nvme admin passthru ...[2024-07-25 17:02:25.839121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:33.598 [2024-07-25 17:02:25.839172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:33.598 passed 00:11:33.598 Test: blockdev copy ...passed 00:11:33.598 Suite: bdevio tests on: Nvme2n1 00:11:33.598 Test: blockdev write read block ...passed 00:11:33.598 Test: blockdev write zeroes read block ...passed 00:11:33.598 Test: blockdev write zeroes read no split ...passed 00:11:33.598 Test: blockdev write zeroes read split ...passed 00:11:33.598 Test: blockdev write zeroes read split partial ...passed 00:11:33.598 Test: blockdev reset ...[2024-07-25 17:02:25.904773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:11:33.598 [2024-07-25 17:02:25.909233] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:33.598 passed 00:11:33.598 Test: blockdev write read 8 blocks ...passed 00:11:33.598 Test: blockdev write read size > 128k ...passed 00:11:33.598 Test: blockdev write read invalid size ...passed 00:11:33.598 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.598 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.598 Test: blockdev write read max offset ...passed 00:11:33.598 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.598 Test: blockdev writev readv 8 blocks ...passed 00:11:33.598 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.598 Test: blockdev writev readv block ...passed 00:11:33.598 Test: blockdev writev readv size > 128k ...passed 00:11:33.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.598 Test: blockdev comparev and writev ...[2024-07-25 17:02:25.918609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27d032000 len:0x1000 00:11:33.598 [2024-07-25 17:02:25.918703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.598 passed 00:11:33.598 Test: blockdev nvme passthru rw ...passed 00:11:33.598 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.599 Test: blockdev nvme admin passthru ...[2024-07-25 17:02:25.919615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:33.599 [2024-07-25 17:02:25.919669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:33.599 passed 00:11:33.599 Test: blockdev copy ...passed 00:11:33.599 Suite: bdevio tests on: Nvme1n1p2 00:11:33.599 Test: blockdev write read block ...passed 00:11:33.599 Test: blockdev write zeroes read block ...passed 00:11:33.599 Test: blockdev write zeroes read no split ...passed 00:11:33.599 Test: blockdev write zeroes read split ...passed 00:11:33.599 Test: blockdev write zeroes read split partial ...passed 00:11:33.599 Test: blockdev reset ...[2024-07-25 17:02:25.987288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:11:33.599 [2024-07-25 17:02:25.991430] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:33.599 passed 00:11:33.599 Test: blockdev write read 8 blocks ...passed 00:11:33.599 Test: blockdev write read size > 128k ...passed 00:11:33.599 Test: blockdev write read invalid size ...passed 00:11:33.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.599 Test: blockdev write read max offset ...passed 00:11:33.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.599 Test: blockdev writev readv 8 blocks ...passed 00:11:33.599 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.599 Test: blockdev writev readv block ...passed 00:11:33.599 Test: blockdev writev readv size > 128k ...passed 00:11:33.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.599 Test: blockdev comparev and writev ...[2024-07-25 17:02:26.000995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27d02e000 len:0x1000 00:11:33.599 [2024-07-25 17:02:26.001060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.599 passed 00:11:33.599 Test: blockdev nvme passthru rw ...passed 00:11:33.599 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.599 Test: blockdev nvme admin passthru ...passed 00:11:33.599 Test: blockdev copy ...passed 00:11:33.599 Suite: bdevio tests on: Nvme1n1p1 00:11:33.599 Test: blockdev write read block ...passed 00:11:33.599 Test: blockdev write zeroes read block ...passed 00:11:33.599 Test: blockdev write zeroes read no split ...passed 00:11:33.599 Test: blockdev write zeroes read split ...passed 00:11:33.599 Test: blockdev write zeroes read split partial ...passed 00:11:33.599 Test: blockdev reset ...[2024-07-25 17:02:26.058968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:11:33.599 [2024-07-25 17:02:26.062964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
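Note the LBAs in the COMPARE notices: the Nvme1n1p2 suite above compares at lba:655360 while the Nvme1n1p1 suite below compares at lba:256, yet both are logged against nsid:1. That is the GPT partition bdevs at work: Nvme1n1p1 and Nvme1n1p2 are partitions carved out of Nvme1n1, and each partition bdev adds its start offset before the I/O reaches the underlying namespace. One hedged way to inspect that mapping is to dump the partition bdev (run against whichever SPDK app has this bdev.json loaded; the socket below is the one used by the nbd stage further down, and exact field names in the dump vary by SPDK version):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      bdev_get_bdevs -b Nvme1n1p2 | jq .
  # the driver-specific section of the dump describes how the partition bdev
  # sits on its parent Nvme1n1 bdev (base bdev, starting offset, GPT GUIDs)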
00:11:33.599 passed 00:11:33.599 Test: blockdev write read 8 blocks ...passed 00:11:33.599 Test: blockdev write read size > 128k ...passed 00:11:33.599 Test: blockdev write read invalid size ...passed 00:11:33.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.857 Test: blockdev write read max offset ...passed 00:11:33.857 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.857 Test: blockdev writev readv 8 blocks ...passed 00:11:33.857 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.857 Test: blockdev writev readv block ...passed 00:11:33.857 Test: blockdev writev readv size > 128k ...passed 00:11:33.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.857 Test: blockdev comparev and writev ...[2024-07-25 17:02:26.072211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27de0e000 len:0x1000 00:11:33.857 [2024-07-25 17:02:26.072273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:33.857 passed 00:11:33.857 Test: blockdev nvme passthru rw ...passed 00:11:33.857 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.857 Test: blockdev nvme admin passthru ...passed 00:11:33.857 Test: blockdev copy ...passed 00:11:33.857 Suite: bdevio tests on: Nvme0n1 00:11:33.857 Test: blockdev write read block ...passed 00:11:33.857 Test: blockdev write zeroes read block ...passed 00:11:33.857 Test: blockdev write zeroes read no split ...passed 00:11:33.857 Test: blockdev write zeroes read split ...passed 00:11:33.857 Test: blockdev write zeroes read split partial ...passed 00:11:33.858 Test: blockdev reset ...[2024-07-25 17:02:26.128353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:33.858 [2024-07-25 17:02:26.132366] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:33.858 passed 00:11:33.858 Test: blockdev write read 8 blocks ...passed 00:11:33.858 Test: blockdev write read size > 128k ...passed 00:11:33.858 Test: blockdev write read invalid size ...passed 00:11:33.858 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.858 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.858 Test: blockdev write read max offset ...passed 00:11:33.858 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.858 Test: blockdev writev readv 8 blocks ...passed 00:11:33.858 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.858 Test: blockdev writev readv block ...passed 00:11:33.858 Test: blockdev writev readv size > 128k ...passed 00:11:33.858 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.858 Test: blockdev comparev and writev ...passed 00:11:33.858 Test: blockdev nvme passthru rw ...[2024-07-25 17:02:26.141378] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:33.858 separate metadata which is not supported yet. 
00:11:33.858 passed 00:11:33.858 Test: blockdev nvme passthru vendor specific ...passed 00:11:33.858 Test: blockdev nvme admin passthru ...[2024-07-25 17:02:26.141989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:33.858 [2024-07-25 17:02:26.142051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:33.858 passed 00:11:33.858 Test: blockdev copy ...passed 00:11:33.858 00:11:33.858 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.858 suites 7 7 n/a 0 0 00:11:33.858 tests 161 161 161 0 0 00:11:33.858 asserts 1025 1025 1025 0 n/a 00:11:33.858 00:11:33.858 Elapsed time = 1.485 seconds 00:11:33.858 0 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67095 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 67095 ']' 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 67095 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67095 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67095' 00:11:33.858 killing process with pid 67095 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 67095 00:11:33.858 17:02:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 67095 00:11:34.792 17:02:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:34.792 00:11:34.792 real 0m2.993s 00:11:34.792 user 0m7.187s 00:11:34.792 sys 0m0.434s 00:11:34.792 17:02:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.792 17:02:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:34.792 ************************************ 00:11:34.792 END TEST bdev_bounds 00:11:34.792 ************************************ 00:11:35.050 17:02:27 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:35.050 17:02:27 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:35.050 17:02:27 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.050 17:02:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:35.050 ************************************ 00:11:35.050 START TEST bdev_nbd 00:11:35.050 ************************************ 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
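The killprocess trace above is the common helper these scripts use to tear a test app down. Reconstructed roughly from the xtrace for illustration only; the real implementation lives in test/common/autotest_common.sh and handles more corner cases, such as apps launched through sudo:
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      # nothing to do if the process already exited
      kill -0 "$pid" 2>/dev/null || return 0
      if [[ $(uname) == Linux ]]; then
          # the real helper inspects the command name so that processes started
          # through sudo get handled specially; not reproduced in this sketch
          ps --no-headers -o comm= "$pid" >/dev/null
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      # reap it so the exit status is collected before the caller continues
      wait "$pid" || true
  }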
Linux == Linux ]] 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67155 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67155 /var/tmp/spdk-nbd.sock 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 67155 ']' 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:35.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.050 17:02:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:35.050 [2024-07-25 17:02:27.386845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
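This is where the bdev_nbd test builds its fixture: it starts the minimal bdev_svc app on a private RPC socket with the same bdev.json configuration, waits for the socket, and then drives the nbd_* RPCs against it. A condensed sketch of the same flow (paths and socket are the ones printed above; the readiness loop is a simplification of the waitforlisten helper):
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-nbd.sock
  conf=$spdk/test/bdev/bdev.json

  # start a minimal SPDK app that only loads the bdev layer from the JSON config
  "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$conf" &
  nbd_pid=$!

  # crude readiness check: retry an RPC until the socket answers
  until "$spdk/scripts/rpc.py" -s "$sock" bdev_get_bdevs >/dev/null 2>&1; do sleep 0.2; done

  # export a bdev as /dev/nbd0, list the mappings, then tear everything down
  "$spdk/scripts/rpc.py" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  "$spdk/scripts/rpc.py" -s "$sock" nbd_get_disks
  "$spdk/scripts/rpc.py" -s "$sock" nbd_stop_disk /dev/nbd0
  kill "$nbd_pid"; wait "$nbd_pid"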
00:11:35.050 [2024-07-25 17:02:27.387293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.309 [2024-07-25 17:02:27.557213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.568 [2024-07-25 17:02:27.807688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:36.135 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.394 1+0 records in 00:11:36.394 1+0 records out 00:11:36.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557702 s, 7.3 MB/s 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:36.394 17:02:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.652 1+0 records in 00:11:36.652 1+0 records out 00:11:36.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708762 s, 5.8 MB/s 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:36.652 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.910 1+0 records in 00:11:36.910 1+0 records out 00:11:36.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841773 s, 4.9 MB/s 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:36.910 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.168 1+0 records in 00:11:37.168 1+0 records out 00:11:37.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882586 s, 4.6 MB/s 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:37.168 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:37.425 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.682 1+0 records in 00:11:37.682 1+0 records out 00:11:37.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908316 s, 4.5 MB/s 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:37.682 17:02:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.940 1+0 records in 00:11:37.940 1+0 records out 00:11:37.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745646 s, 5.5 MB/s 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:37.940 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.198 1+0 records in 00:11:38.198 1+0 records out 00:11:38.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894525 s, 4.6 MB/s 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:38.198 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd0", 00:11:38.456 "bdev_name": "Nvme0n1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd1", 00:11:38.456 "bdev_name": "Nvme1n1p1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd2", 00:11:38.456 "bdev_name": "Nvme1n1p2" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd3", 00:11:38.456 "bdev_name": "Nvme2n1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd4", 00:11:38.456 "bdev_name": "Nvme2n2" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd5", 00:11:38.456 "bdev_name": "Nvme2n3" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd6", 00:11:38.456 "bdev_name": "Nvme3n1" 00:11:38.456 } 00:11:38.456 ]' 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd0", 00:11:38.456 "bdev_name": "Nvme0n1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd1", 00:11:38.456 "bdev_name": "Nvme1n1p1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd2", 00:11:38.456 "bdev_name": "Nvme1n1p2" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd3", 00:11:38.456 "bdev_name": "Nvme2n1" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd4", 00:11:38.456 "bdev_name": "Nvme2n2" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd5", 00:11:38.456 "bdev_name": "Nvme2n3" 00:11:38.456 }, 00:11:38.456 { 00:11:38.456 "nbd_device": "/dev/nbd6", 00:11:38.456 "bdev_name": "Nvme3n1" 00:11:38.456 } 00:11:38.456 ]' 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.456 17:02:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.022 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.338 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.338 17:02:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:39.596 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:39.596 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:39.596 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:39.596 17:02:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.596 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.854 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.110 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.367 17:02:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:40.932 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:40.933 
17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:40.933 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:41.189 /dev/nbd0 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.189 1+0 records in 00:11:41.189 1+0 records out 00:11:41.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661656 s, 6.2 MB/s 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:41.189 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:41.446 /dev/nbd1 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.446 17:02:33 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.446 1+0 records in 00:11:41.446 1+0 records out 00:11:41.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781512 s, 5.2 MB/s 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:41.446 17:02:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:41.704 /dev/nbd10 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.704 1+0 records in 00:11:41.704 1+0 records out 00:11:41.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675843 s, 6.1 MB/s 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:41.704 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:41.961 /dev/nbd11 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.961 1+0 records in 00:11:41.961 1+0 records out 00:11:41.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655147 s, 6.3 MB/s 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:41.961 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:42.218 /dev/nbd12 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
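Throughout this stretch every nbd_start_disk is paired with the waitfornbd helper: poll /proc/partitions until the device node shows up, then read one 4 KiB block through it with O_DIRECT to prove the NBD connection actually serves I/O (the dd "1+0 records" lines above). The earlier start/stop pass used the mirror-image waitfornbd_exit, which polls until the name disappears again after nbd_stop_disk. A rough reconstruction for illustration; the scratch file path and the retry delay are assumptions, and the real helper is in test/common/autotest_common.sh:
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the device is usable once the kernel lists it in /proc/partitions
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # prove the device actually serves reads: one 4 KiB O_DIRECT read
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [[ $size != 0 ]]
  }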
00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.219 1+0 records in 00:11:42.219 1+0 records out 00:11:42.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660506 s, 6.2 MB/s 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:42.219 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:42.476 /dev/nbd13 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:42.476 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.733 1+0 records in 00:11:42.733 1+0 records out 00:11:42.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900404 s, 4.5 MB/s 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:42.733 17:02:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:42.991 /dev/nbd14 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.991 1+0 records in 00:11:42.991 1+0 records out 00:11:42.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890166 s, 4.6 MB/s 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.991 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:43.248 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd0", 00:11:43.248 "bdev_name": "Nvme0n1" 00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd1", 00:11:43.248 "bdev_name": "Nvme1n1p1" 00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd10", 00:11:43.248 "bdev_name": "Nvme1n1p2" 00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd11", 00:11:43.248 "bdev_name": "Nvme2n1" 00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd12", 00:11:43.248 "bdev_name": "Nvme2n2" 00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd13", 00:11:43.248 "bdev_name": "Nvme2n3" 
00:11:43.248 }, 00:11:43.248 { 00:11:43.248 "nbd_device": "/dev/nbd14", 00:11:43.248 "bdev_name": "Nvme3n1" 00:11:43.249 } 00:11:43.249 ]' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd0", 00:11:43.249 "bdev_name": "Nvme0n1" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd1", 00:11:43.249 "bdev_name": "Nvme1n1p1" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd10", 00:11:43.249 "bdev_name": "Nvme1n1p2" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd11", 00:11:43.249 "bdev_name": "Nvme2n1" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd12", 00:11:43.249 "bdev_name": "Nvme2n2" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd13", 00:11:43.249 "bdev_name": "Nvme2n3" 00:11:43.249 }, 00:11:43.249 { 00:11:43.249 "nbd_device": "/dev/nbd14", 00:11:43.249 "bdev_name": "Nvme3n1" 00:11:43.249 } 00:11:43.249 ]' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:43.249 /dev/nbd1 00:11:43.249 /dev/nbd10 00:11:43.249 /dev/nbd11 00:11:43.249 /dev/nbd12 00:11:43.249 /dev/nbd13 00:11:43.249 /dev/nbd14' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:43.249 /dev/nbd1 00:11:43.249 /dev/nbd10 00:11:43.249 /dev/nbd11 00:11:43.249 /dev/nbd12 00:11:43.249 /dev/nbd13 00:11:43.249 /dev/nbd14' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:43.249 256+0 records in 00:11:43.249 256+0 records out 00:11:43.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585878 s, 179 MB/s 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:43.249 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:43.506 256+0 records in 00:11:43.506 256+0 records out 00:11:43.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.170482 s, 6.2 MB/s 00:11:43.507 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:43.507 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:43.507 256+0 records in 00:11:43.507 256+0 records out 00:11:43.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159799 s, 6.6 MB/s 00:11:43.507 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:43.507 17:02:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:43.764 256+0 records in 00:11:43.764 256+0 records out 00:11:43.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178426 s, 5.9 MB/s 00:11:43.764 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:43.764 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:44.022 256+0 records in 00:11:44.022 256+0 records out 00:11:44.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16754 s, 6.3 MB/s 00:11:44.022 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.022 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:44.279 256+0 records in 00:11:44.279 256+0 records out 00:11:44.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17188 s, 6.1 MB/s 00:11:44.280 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.280 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:44.280 256+0 records in 00:11:44.280 256+0 records out 00:11:44.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158665 s, 6.6 MB/s 00:11:44.280 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.280 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:44.537 256+0 records in 00:11:44.537 256+0 records out 00:11:44.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165453 s, 6.3 MB/s 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.538 17:02:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.795 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.053 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.311 17:02:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:45.875 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:45.876 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.876 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.876 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:45.876 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.133 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.390 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.647 17:02:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:46.904 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:46.905 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:47.162 malloc_lvol_verify 00:11:47.163 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:47.463 39ebfa1d-bae8-4df6-92fa-c0b16fa16007 00:11:47.463 17:02:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:47.721 8e4b4161-32d6-48bc-84ce-b8d0307e110c 00:11:47.721 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:47.980 /dev/nbd0 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:47.980 mke2fs 1.46.5 (30-Dec-2021) 00:11:47.980 Discarding device blocks: 0/4096 done 00:11:47.980 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:47.980 00:11:47.980 Allocating group tables: 0/1 done 00:11:47.980 Writing inode tables: 0/1 done 00:11:47.980 Creating journal (1024 blocks): done 00:11:47.980 Writing superblocks and filesystem accounting information: 0/1 done 00:11:47.980 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:47.980 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67155 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 67155 ']' 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 67155 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67155 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.245 killing process with pid 67155 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67155' 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 67155 00:11:48.245 17:02:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 67155 00:11:50.146 17:02:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:50.146 00:11:50.146 real 0m14.809s 00:11:50.146 user 0m20.706s 00:11:50.146 sys 0m4.790s 00:11:50.146 ************************************ 00:11:50.146 END TEST bdev_nbd 00:11:50.146 ************************************ 00:11:50.146 17:02:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.146 17:02:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:50.146 skipping fio tests on NVMe due to multi-ns failures. 00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:50.146 17:02:42 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:50.146 17:02:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:50.146 17:02:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.146 17:02:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:50.146 ************************************ 00:11:50.146 START TEST bdev_verify 00:11:50.146 ************************************ 00:11:50.146 17:02:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:50.146 [2024-07-25 17:02:42.252532] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:50.146 [2024-07-25 17:02:42.252714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67610 ] 00:11:50.146 [2024-07-25 17:02:42.425721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.405 [2024-07-25 17:02:42.709292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.405 [2024-07-25 17:02:42.709320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.339 Running I/O for 5 seconds... 
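The bdevperf invocation above should be reproducible by hand with the same arguments (paths as on the CI VM): -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, and -m the core mask; -C is passed through exactly as the harness does.

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3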
00:11:56.644 00:11:56.644 Latency(us) 00:11:56.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0xbd0bd 00:11:56.644 Nvme0n1 : 5.07 1425.47 5.57 0.00 0.00 89319.13 11617.75 89128.96 00:11:56.644 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:56.644 Nvme0n1 : 5.07 1451.35 5.67 0.00 0.00 87716.79 12690.15 92465.34 00:11:56.644 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x4ff80 00:11:56.644 Nvme1n1p1 : 5.09 1432.06 5.59 0.00 0.00 89009.39 16562.73 81979.58 00:11:56.644 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:56.644 Nvme1n1p1 : 5.09 1457.78 5.69 0.00 0.00 87421.08 17635.14 85315.96 00:11:56.644 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x4ff7f 00:11:56.644 Nvme1n1p2 : 5.10 1430.97 5.59 0.00 0.00 88860.74 17992.61 75783.45 00:11:56.644 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:56.644 Nvme1n1p2 : 5.09 1457.18 5.69 0.00 0.00 87216.95 17754.30 74353.57 00:11:56.644 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x80000 00:11:56.644 Nvme2n1 : 5.10 1430.01 5.59 0.00 0.00 88722.13 18945.86 71493.82 00:11:56.644 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x80000 length 0x80000 00:11:56.644 Nvme2n1 : 5.10 1456.11 5.69 0.00 0.00 87094.74 19779.96 65297.69 00:11:56.644 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x80000 00:11:56.644 Nvme2n2 : 5.11 1429.04 5.58 0.00 0.00 88582.64 20733.21 74353.57 00:11:56.644 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x80000 length 0x80000 00:11:56.644 Nvme2n2 : 5.10 1455.14 5.68 0.00 0.00 86964.12 21448.15 65774.31 00:11:56.644 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x80000 00:11:56.644 Nvme2n3 : 5.11 1428.08 5.58 0.00 0.00 88438.93 20137.43 77213.32 00:11:56.644 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x80000 length 0x80000 00:11:56.644 Nvme2n3 : 5.11 1454.15 5.68 0.00 0.00 86822.87 20375.74 67680.81 00:11:56.644 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x0 length 0x20000 00:11:56.644 Nvme3n1 : 5.11 1427.19 5.57 0.00 0.00 88294.66 13822.14 81026.33 00:11:56.644 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:56.644 Verification LBA range: start 0x20000 length 0x20000 00:11:56.644 Nvme3n1 : 5.11 1453.16 5.68 0.00 0.00 86685.38 15252.01 69587.32 00:11:56.644 =================================================================================================================== 00:11:56.644 Total : 20187.70 78.86 0.00 0.00 87931.50 11617.75 92465.34 
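A quick sanity check on the table above: the MiB/s column is just the IOPS column scaled by the 4096-byte I/O size, i.e. MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. For the first Nvme0n1 job, 1425.47 / 256 is about 5.57 MiB/s, and for the total, 20187.70 / 256 is about 78.86 MiB/s, matching the printed values. (The big-I/O run further down uses -o 65536, where the factor becomes 1/16.) For example:

  awk 'BEGIN { iops = 1425.47; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / 2^20 }'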
00:11:58.019 00:11:58.019 real 0m8.136s 00:11:58.019 user 0m14.610s 00:11:58.019 sys 0m0.389s 00:11:58.019 17:02:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.019 17:02:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:58.019 ************************************ 00:11:58.019 END TEST bdev_verify 00:11:58.019 ************************************ 00:11:58.019 17:02:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:58.019 17:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:58.019 17:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.019 17:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:58.019 ************************************ 00:11:58.019 START TEST bdev_verify_big_io 00:11:58.019 ************************************ 00:11:58.019 17:02:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:58.019 [2024-07-25 17:02:50.457571] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:58.019 [2024-07-25 17:02:50.457803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67714 ] 00:11:58.277 [2024-07-25 17:02:50.668389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.535 [2024-07-25 17:02:50.964053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.535 [2024-07-25 17:02:50.964072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.470 Running I/O for 5 seconds... 
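Each of these sub-tests is driven by the same run_test wrapper from common/autotest_common.sh: it prints the START TEST banner seen above, times the command, and closes with the END TEST banner after the real/user/sys totals. A rough sketch of that shape only (the actual helper also handles xtrace toggling and exit-code bookkeeping not reproduced here):

  run_test_sketch() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                 # the real/user/sys lines in the log come from this timing
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }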
00:12:06.025 00:12:06.025 Latency(us) 00:12:06.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.025 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0xbd0b 00:12:06.025 Nvme0n1 : 5.76 114.90 7.18 0.00 0.00 1063773.39 22520.55 1273543.21 00:12:06.025 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:06.025 Nvme0n1 : 5.69 149.38 9.34 0.00 0.00 828356.12 23473.80 907494.87 00:12:06.025 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x4ff8 00:12:06.025 Nvme1n1p1 : 5.84 120.53 7.53 0.00 0.00 988061.83 74353.57 1075267.03 00:12:06.025 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x4ff8 length 0x4ff8 00:12:06.025 Nvme1n1p1 : 5.69 150.39 9.40 0.00 0.00 800278.08 62437.93 793104.76 00:12:06.025 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x4ff7 00:12:06.025 Nvme1n1p2 : 5.84 120.46 7.53 0.00 0.00 950204.97 76260.07 892242.85 00:12:06.025 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x4ff7 length 0x4ff7 00:12:06.025 Nvme1n1p2 : 5.77 156.29 9.77 0.00 0.00 753707.07 66727.56 800730.76 00:12:06.025 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x8000 00:12:06.025 Nvme2n1 : 5.93 122.97 7.69 0.00 0.00 901717.66 44564.48 949437.91 00:12:06.025 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x8000 length 0x8000 00:12:06.025 Nvme2n1 : 5.77 155.73 9.73 0.00 0.00 736955.73 66727.56 804543.77 00:12:06.025 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x8000 00:12:06.025 Nvme2n2 : 5.97 128.26 8.02 0.00 0.00 846476.62 22163.08 1647217.57 00:12:06.025 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x8000 length 0x8000 00:12:06.025 Nvme2n2 : 5.78 158.78 9.92 0.00 0.00 710145.68 66250.94 789291.75 00:12:06.025 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x8000 00:12:06.025 Nvme2n3 : 6.02 139.45 8.72 0.00 0.00 757043.98 17039.36 1845493.76 00:12:06.025 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x8000 length 0x8000 00:12:06.025 Nvme2n3 : 5.84 171.30 10.71 0.00 0.00 650919.94 24307.90 804543.77 00:12:06.025 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x0 length 0x2000 00:12:06.025 Nvme3n1 : 6.14 184.73 11.55 0.00 0.00 557548.40 1236.25 1860745.77 00:12:06.025 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:06.025 Verification LBA range: start 0x2000 length 0x2000 00:12:06.025 Nvme3n1 : 5.85 179.28 11.21 0.00 0.00 611048.12 3604.48 812169.77 00:12:06.025 =================================================================================================================== 00:12:06.025 Total : 2052.45 128.28 0.00 0.00 775663.81 1236.25 
1860745.77 00:12:07.926 00:12:07.926 real 0m9.722s 00:12:07.926 user 0m17.667s 00:12:07.926 sys 0m0.481s 00:12:07.926 17:03:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.926 17:03:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.926 ************************************ 00:12:07.926 END TEST bdev_verify_big_io 00:12:07.926 ************************************ 00:12:07.926 17:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:07.926 17:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:07.926 17:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.926 17:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:07.926 ************************************ 00:12:07.926 START TEST bdev_write_zeroes 00:12:07.926 ************************************ 00:12:07.926 17:03:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:07.926 [2024-07-25 17:03:00.214211] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:07.926 [2024-07-25 17:03:00.214368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67834 ] 00:12:07.926 [2024-07-25 17:03:00.379967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.184 [2024-07-25 17:03:00.627109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.118 Running I/O for 1 seconds... 
00:12:10.051 00:12:10.051 Latency(us) 00:12:10.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.051 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme0n1 : 1.02 7230.90 28.25 0.00 0.00 17637.40 12630.57 26333.56 00:12:10.051 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme1n1p1 : 1.02 7220.74 28.21 0.00 0.00 17630.79 13047.62 27048.49 00:12:10.051 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme1n1p2 : 1.02 7211.25 28.17 0.00 0.00 17593.71 12451.84 25499.46 00:12:10.051 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme2n1 : 1.02 7252.76 28.33 0.00 0.00 17438.34 8877.15 25022.84 00:12:10.051 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme2n2 : 1.02 7243.65 28.30 0.00 0.00 17429.75 9115.46 24903.68 00:12:10.051 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme2n3 : 1.03 7234.85 28.26 0.00 0.00 17398.73 9413.35 24784.52 00:12:10.051 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:10.051 Nvme3n1 : 1.03 7225.92 28.23 0.00 0.00 17388.24 8162.21 24665.37 00:12:10.051 =================================================================================================================== 00:12:10.051 Total : 50620.08 197.73 0.00 0.00 17501.98 8162.21 27048.49 00:12:11.427 00:12:11.427 real 0m3.474s 00:12:11.427 user 0m3.064s 00:12:11.427 sys 0m0.282s 00:12:11.427 17:03:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.427 17:03:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:11.427 ************************************ 00:12:11.427 END TEST bdev_write_zeroes 00:12:11.427 ************************************ 00:12:11.427 17:03:03 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:11.427 17:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:11.427 17:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.427 17:03:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:11.427 ************************************ 00:12:11.427 START TEST bdev_json_nonenclosed 00:12:11.427 ************************************ 00:12:11.427 17:03:03 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:11.427 [2024-07-25 17:03:03.756245] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:11.427 [2024-07-25 17:03:03.756438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67898 ] 00:12:11.686 [2024-07-25 17:03:03.931489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.967 [2024-07-25 17:03:04.199128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.967 [2024-07-25 17:03:04.199257] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:11.967 [2024-07-25 17:03:04.199290] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:11.967 [2024-07-25 17:03:04.199307] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.266 00:12:12.266 real 0m1.005s 00:12:12.266 user 0m0.718s 00:12:12.266 sys 0m0.179s 00:12:12.266 17:03:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.266 17:03:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:12.266 ************************************ 00:12:12.266 END TEST bdev_json_nonenclosed 00:12:12.266 ************************************ 00:12:12.266 17:03:04 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:12.266 17:03:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:12.266 17:03:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.266 17:03:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:12.266 ************************************ 00:12:12.266 START TEST bdev_json_nonarray 00:12:12.266 ************************************ 00:12:12.266 17:03:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:12.524 [2024-07-25 17:03:04.816688] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:12.524 [2024-07-25 17:03:04.816888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67929 ] 00:12:12.781 [2024-07-25 17:03:04.996049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.781 [2024-07-25 17:03:05.244263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.781 [2024-07-25 17:03:05.244398] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
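The two JSON negative tests above exercise exactly the shapes rejected in these error messages: a configuration that is not enclosed in a JSON object, and one whose "subsystems" value is not an array. For reference, a minimal configuration that passes both checks has the form below; the empty bdev subsystem is only to illustrate the shape (real configs list RPC-style entries under "config"), and the output path is illustrative.

  cat > /tmp/minimal_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }
  EOF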
00:12:12.781 [2024-07-25 17:03:05.244432] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:12.781 [2024-07-25 17:03:05.244450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:13.348 00:12:13.348 real 0m0.978s 00:12:13.348 user 0m0.716s 00:12:13.348 sys 0m0.154s 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:13.348 ************************************ 00:12:13.348 END TEST bdev_json_nonarray 00:12:13.348 ************************************ 00:12:13.348 17:03:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:12:13.348 17:03:05 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:12:13.348 17:03:05 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:12:13.348 17:03:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:13.348 17:03:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.348 17:03:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:13.348 ************************************ 00:12:13.348 START TEST bdev_gpt_uuid 00:12:13.348 ************************************ 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67959 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67959 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67959 ']' 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.348 17:03:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:13.606 [2024-07-25 17:03:05.846132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
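waitforlisten, traced above, loops until the freshly started spdk_tgt answers on its RPC socket, giving up after max_retries (100 here) or if the process dies first. A sketch of that loop under those assumptions, using rpc_get_methods as a stand-in liveness probe (the exact RPC the helper issues is not visible in this trace):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1     # target exited before listening
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                               # socket is up and answering
          fi
          sleep 0.1
      done
      return 1
  }
  # usage: waitforlisten_sketch 67959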
00:12:13.606 [2024-07-25 17:03:05.846303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67959 ] 00:12:13.606 [2024-07-25 17:03:06.010997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.865 [2024-07-25 17:03:06.252815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.799 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.799 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:12:14.799 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:14.799 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.799 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:15.058 Some configs were skipped because the RPC state that can call them passed over. 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:12:15.058 { 00:12:15.058 "name": "Nvme1n1p1", 00:12:15.058 "aliases": [ 00:12:15.058 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:15.058 ], 00:12:15.058 "product_name": "GPT Disk", 00:12:15.058 "block_size": 4096, 00:12:15.058 "num_blocks": 655104, 00:12:15.058 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:15.058 "assigned_rate_limits": { 00:12:15.058 "rw_ios_per_sec": 0, 00:12:15.058 "rw_mbytes_per_sec": 0, 00:12:15.058 "r_mbytes_per_sec": 0, 00:12:15.058 "w_mbytes_per_sec": 0 00:12:15.058 }, 00:12:15.058 "claimed": false, 00:12:15.058 "zoned": false, 00:12:15.058 "supported_io_types": { 00:12:15.058 "read": true, 00:12:15.058 "write": true, 00:12:15.058 "unmap": true, 00:12:15.058 "flush": true, 00:12:15.058 "reset": true, 00:12:15.058 "nvme_admin": false, 00:12:15.058 "nvme_io": false, 00:12:15.058 "nvme_io_md": false, 00:12:15.058 "write_zeroes": true, 00:12:15.058 "zcopy": false, 00:12:15.058 "get_zone_info": false, 00:12:15.058 "zone_management": false, 00:12:15.058 "zone_append": false, 00:12:15.058 "compare": true, 00:12:15.058 "compare_and_write": false, 00:12:15.058 "abort": true, 00:12:15.058 "seek_hole": false, 00:12:15.058 "seek_data": false, 00:12:15.058 "copy": true, 00:12:15.058 "nvme_iov_md": false 00:12:15.058 }, 00:12:15.058 "driver_specific": { 
00:12:15.058 "gpt": { 00:12:15.058 "base_bdev": "Nvme1n1", 00:12:15.058 "offset_blocks": 256, 00:12:15.058 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:15.058 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:15.058 "partition_name": "SPDK_TEST_first" 00:12:15.058 } 00:12:15.058 } 00:12:15.058 } 00:12:15.058 ]' 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:12:15.058 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:12:15.317 { 00:12:15.317 "name": "Nvme1n1p2", 00:12:15.317 "aliases": [ 00:12:15.317 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:15.317 ], 00:12:15.317 "product_name": "GPT Disk", 00:12:15.317 "block_size": 4096, 00:12:15.317 "num_blocks": 655103, 00:12:15.317 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:15.317 "assigned_rate_limits": { 00:12:15.317 "rw_ios_per_sec": 0, 00:12:15.317 "rw_mbytes_per_sec": 0, 00:12:15.317 "r_mbytes_per_sec": 0, 00:12:15.317 "w_mbytes_per_sec": 0 00:12:15.317 }, 00:12:15.317 "claimed": false, 00:12:15.317 "zoned": false, 00:12:15.317 "supported_io_types": { 00:12:15.317 "read": true, 00:12:15.317 "write": true, 00:12:15.317 "unmap": true, 00:12:15.317 "flush": true, 00:12:15.317 "reset": true, 00:12:15.317 "nvme_admin": false, 00:12:15.317 "nvme_io": false, 00:12:15.317 "nvme_io_md": false, 00:12:15.317 "write_zeroes": true, 00:12:15.317 "zcopy": false, 00:12:15.317 "get_zone_info": false, 00:12:15.317 "zone_management": false, 00:12:15.317 "zone_append": false, 00:12:15.317 "compare": true, 00:12:15.317 "compare_and_write": false, 00:12:15.317 "abort": true, 00:12:15.317 "seek_hole": false, 00:12:15.317 "seek_data": false, 00:12:15.317 "copy": true, 00:12:15.317 "nvme_iov_md": false 00:12:15.317 }, 00:12:15.317 "driver_specific": { 00:12:15.317 "gpt": { 00:12:15.317 "base_bdev": "Nvme1n1", 00:12:15.317 "offset_blocks": 655360, 00:12:15.317 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:15.317 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:15.317 "partition_name": "SPDK_TEST_second" 00:12:15.317 } 00:12:15.317 } 00:12:15.317 } 00:12:15.317 ]' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67959 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67959 ']' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67959 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67959 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.317 killing process with pid 67959 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67959' 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67959 00:12:15.317 17:03:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67959 00:12:17.849 00:12:17.849 real 0m4.244s 00:12:17.849 user 0m4.428s 00:12:17.849 sys 0m0.562s 00:12:17.849 17:03:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.849 ************************************ 00:12:17.849 END TEST bdev_gpt_uuid 00:12:17.849 ************************************ 00:12:17.849 17:03:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:17.849 17:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:18.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.108 Waiting for block devices as requested 00:12:18.366 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.366 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:18.366 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:18.625 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.889 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:23.889 17:03:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:23.889 17:03:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:23.889 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:23.889 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:23.889 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:23.889 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:23.889 17:03:16 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:23.889 00:12:23.889 real 1m8.036s 00:12:23.889 user 1m25.938s 00:12:23.889 sys 0m10.763s 00:12:23.889 ************************************ 00:12:23.889 END TEST blockdev_nvme_gpt 00:12:23.889 ************************************ 00:12:23.889 17:03:16 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.889 17:03:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:23.889 17:03:16 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:23.889 17:03:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:23.889 17:03:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.889 17:03:16 -- common/autotest_common.sh@10 -- # set +x 00:12:23.889 ************************************ 00:12:23.890 START TEST nvme 00:12:23.890 ************************************ 00:12:23.890 17:03:16 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:23.890 * Looking for test storage... 00:12:23.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:23.890 17:03:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:24.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.034 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.034 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.034 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.292 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:25.292 17:03:17 nvme -- nvme/nvme.sh@79 -- # uname 00:12:25.292 17:03:17 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:25.292 17:03:17 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:25.292 17:03:17 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1071 -- # stubpid=68605 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:25.292 Waiting for stub to ready for secondary processes... 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
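The bdev_gpt_uuid test that finishes above drives its check through the harness helpers rpc_cmd and jq; below is a minimal stand-alone sketch of the same verification, reusing the SPDK_TEST_second partition UUID and repo path from this run and calling scripts/rpc.py directly in place of the rpc_cmd wrapper (an assumption for illustration, not the exact blockdev.sh code).
set -euo pipefail
rootdir=/home/vagrant/spdk_repo/spdk
uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df   # SPDK_TEST_second partition UUID seen in this run
# Look up the GPT partition bdev by its UUID alias, then confirm that exactly one
# bdev comes back and that both the alias and the driver-specific
# unique_partition_guid echo the same value.
bdev_json=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$uuid")
[[ $(jq -r 'length' <<< "$bdev_json") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$uuid" ]]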
00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68605 ]] 00:12:25.292 17:03:17 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:12:25.292 [2024-07-25 17:03:17.688082] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:25.292 [2024-07-25 17:03:17.688278] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:26.225 17:03:18 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:26.225 17:03:18 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68605 ]] 00:12:26.225 17:03:18 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:12:26.791 [2024-07-25 17:03:19.184787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.049 [2024-07-25 17:03:19.469468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.049 [2024-07-25 17:03:19.469565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.049 [2024-07-25 17:03:19.469538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.049 [2024-07-25 17:03:19.490242] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:27.049 [2024-07-25 17:03:19.490294] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:27.049 [2024-07-25 17:03:19.501509] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:27.049 [2024-07-25 17:03:19.501705] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:27.049 [2024-07-25 17:03:19.505488] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:27.049 [2024-07-25 17:03:19.505779] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:27.049 [2024-07-25 17:03:19.505854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:27.049 [2024-07-25 17:03:19.508036] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:27.049 [2024-07-25 17:03:19.508250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:27.049 [2024-07-25 17:03:19.508333] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:27.049 [2024-07-25 17:03:19.511027] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:27.049 [2024-07-25 17:03:19.511288] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:27.049 [2024-07-25 17:03:19.511376] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:27.049 [2024-07-25 17:03:19.511431] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:27.049 [2024-07-25 17:03:19.511481] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:27.370 17:03:19 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:27.370 done. 00:12:27.370 17:03:19 nvme -- common/autotest_common.sh@1078 -- # echo done. 
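With the stub primary process up, the nvme suite traced below first resolves the controller PCI addresses and then runs the identify tool against the same shared-memory group; the following is a minimal sketch of that enumeration step under the assumptions of this run's repo layout (it mirrors the gen_nvme.sh | jq pipeline and the spdk_nvme_identify -i 0 invocation visible in the trace, not the harness code itself).
set -euo pipefail
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a JSON attach-controller config; jq pulls the PCI address
# (traddr) of every NVMe device currently bound for SPDK use.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 ))          # bail out under set -e if nothing was found
printf '%s\n' "${bdfs[@]}"
# -i 0 selects the same shared-memory id the stub above was started with, so the
# identify run attaches as a secondary process to the controllers the stub owns.
"$rootdir/build/bin/spdk_nvme_identify" -i 0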
00:12:27.370 17:03:19 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:27.370 17:03:19 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:12:27.370 17:03:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.370 17:03:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 ************************************ 00:12:27.370 START TEST nvme_reset 00:12:27.370 ************************************ 00:12:27.370 17:03:19 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:27.628 Initializing NVMe Controllers 00:12:27.628 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:27.628 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:27.628 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:27.628 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:27.628 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:27.628 00:12:27.628 real 0m0.288s 00:12:27.628 user 0m0.118s 00:12:27.628 sys 0m0.127s 00:12:27.628 17:03:19 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.628 17:03:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:27.628 ************************************ 00:12:27.628 END TEST nvme_reset 00:12:27.628 ************************************ 00:12:27.628 17:03:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:27.628 17:03:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:27.628 17:03:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.628 17:03:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.628 ************************************ 00:12:27.628 START TEST nvme_identify 00:12:27.628 ************************************ 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:12:27.628 17:03:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:27.628 17:03:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:27.628 17:03:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:27.628 17:03:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:27.628 17:03:19 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:27.628 17:03:20 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:27.628 17:03:20 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:27.628 17:03:20 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:27.890 [2024-07-25 17:03:20.319029] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68638 terminated unexpected 00:12:27.890 ===================================================== 00:12:27.890 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:27.890 
===================================================== 00:12:27.890 Controller Capabilities/Features 00:12:27.890 ================================ 00:12:27.890 Vendor ID: 1b36 00:12:27.890 Subsystem Vendor ID: 1af4 00:12:27.890 Serial Number: 12340 00:12:27.890 Model Number: QEMU NVMe Ctrl 00:12:27.890 Firmware Version: 8.0.0 00:12:27.890 Recommended Arb Burst: 6 00:12:27.890 IEEE OUI Identifier: 00 54 52 00:12:27.890 Multi-path I/O 00:12:27.890 May have multiple subsystem ports: No 00:12:27.890 May have multiple controllers: No 00:12:27.890 Associated with SR-IOV VF: No 00:12:27.890 Max Data Transfer Size: 524288 00:12:27.890 Max Number of Namespaces: 256 00:12:27.890 Max Number of I/O Queues: 64 00:12:27.890 NVMe Specification Version (VS): 1.4 00:12:27.890 NVMe Specification Version (Identify): 1.4 00:12:27.890 Maximum Queue Entries: 2048 00:12:27.890 Contiguous Queues Required: Yes 00:12:27.890 Arbitration Mechanisms Supported 00:12:27.890 Weighted Round Robin: Not Supported 00:12:27.890 Vendor Specific: Not Supported 00:12:27.890 Reset Timeout: 7500 ms 00:12:27.890 Doorbell Stride: 4 bytes 00:12:27.890 NVM Subsystem Reset: Not Supported 00:12:27.890 Command Sets Supported 00:12:27.890 NVM Command Set: Supported 00:12:27.890 Boot Partition: Not Supported 00:12:27.890 Memory Page Size Minimum: 4096 bytes 00:12:27.890 Memory Page Size Maximum: 65536 bytes 00:12:27.890 Persistent Memory Region: Not Supported 00:12:27.890 Optional Asynchronous Events Supported 00:12:27.890 Namespace Attribute Notices: Supported 00:12:27.890 Firmware Activation Notices: Not Supported 00:12:27.890 ANA Change Notices: Not Supported 00:12:27.890 PLE Aggregate Log Change Notices: Not Supported 00:12:27.890 LBA Status Info Alert Notices: Not Supported 00:12:27.890 EGE Aggregate Log Change Notices: Not Supported 00:12:27.890 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.890 Zone Descriptor Change Notices: Not Supported 00:12:27.890 Discovery Log Change Notices: Not Supported 00:12:27.890 Controller Attributes 00:12:27.890 128-bit Host Identifier: Not Supported 00:12:27.890 Non-Operational Permissive Mode: Not Supported 00:12:27.890 NVM Sets: Not Supported 00:12:27.890 Read Recovery Levels: Not Supported 00:12:27.890 Endurance Groups: Not Supported 00:12:27.890 Predictable Latency Mode: Not Supported 00:12:27.890 Traffic Based Keep ALive: Not Supported 00:12:27.890 Namespace Granularity: Not Supported 00:12:27.890 SQ Associations: Not Supported 00:12:27.890 UUID List: Not Supported 00:12:27.890 Multi-Domain Subsystem: Not Supported 00:12:27.890 Fixed Capacity Management: Not Supported 00:12:27.890 Variable Capacity Management: Not Supported 00:12:27.890 Delete Endurance Group: Not Supported 00:12:27.890 Delete NVM Set: Not Supported 00:12:27.890 Extended LBA Formats Supported: Supported 00:12:27.890 Flexible Data Placement Supported: Not Supported 00:12:27.890 00:12:27.890 Controller Memory Buffer Support 00:12:27.890 ================================ 00:12:27.890 Supported: No 00:12:27.890 00:12:27.890 Persistent Memory Region Support 00:12:27.890 ================================ 00:12:27.890 Supported: No 00:12:27.890 00:12:27.890 Admin Command Set Attributes 00:12:27.890 ============================ 00:12:27.890 Security Send/Receive: Not Supported 00:12:27.890 Format NVM: Supported 00:12:27.890 Firmware Activate/Download: Not Supported 00:12:27.890 Namespace Management: Supported 00:12:27.890 Device Self-Test: Not Supported 00:12:27.890 Directives: Supported 00:12:27.890 NVMe-MI: Not Supported 
00:12:27.890 Virtualization Management: Not Supported 00:12:27.890 Doorbell Buffer Config: Supported 00:12:27.890 Get LBA Status Capability: Not Supported 00:12:27.890 Command & Feature Lockdown Capability: Not Supported 00:12:27.890 Abort Command Limit: 4 00:12:27.890 Async Event Request Limit: 4 00:12:27.890 Number of Firmware Slots: N/A 00:12:27.890 Firmware Slot 1 Read-Only: N/A 00:12:27.890 Firmware Activation Without Reset: N/A 00:12:27.890 Multiple Update Detection Support: N/A 00:12:27.890 Firmware Update Granularity: No Information Provided 00:12:27.890 Per-Namespace SMART Log: Yes 00:12:27.890 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.890 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:27.890 Command Effects Log Page: Supported 00:12:27.890 Get Log Page Extended Data: Supported 00:12:27.890 Telemetry Log Pages: Not Supported 00:12:27.890 Persistent Event Log Pages: Not Supported 00:12:27.890 Supported Log Pages Log Page: May Support 00:12:27.890 Commands Supported & Effects Log Page: Not Supported 00:12:27.890 Feature Identifiers & Effects Log Page:May Support 00:12:27.890 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.890 Data Area 4 for Telemetry Log: Not Supported 00:12:27.890 Error Log Page Entries Supported: 1 00:12:27.890 Keep Alive: Not Supported 00:12:27.890 00:12:27.890 NVM Command Set Attributes 00:12:27.890 ========================== 00:12:27.890 Submission Queue Entry Size 00:12:27.890 Max: 64 00:12:27.890 Min: 64 00:12:27.890 Completion Queue Entry Size 00:12:27.890 Max: 16 00:12:27.890 Min: 16 00:12:27.890 Number of Namespaces: 256 00:12:27.890 Compare Command: Supported 00:12:27.890 Write Uncorrectable Command: Not Supported 00:12:27.890 Dataset Management Command: Supported 00:12:27.890 Write Zeroes Command: Supported 00:12:27.890 Set Features Save Field: Supported 00:12:27.890 Reservations: Not Supported 00:12:27.890 Timestamp: Supported 00:12:27.890 Copy: Supported 00:12:27.890 Volatile Write Cache: Present 00:12:27.890 Atomic Write Unit (Normal): 1 00:12:27.890 Atomic Write Unit (PFail): 1 00:12:27.890 Atomic Compare & Write Unit: 1 00:12:27.890 Fused Compare & Write: Not Supported 00:12:27.890 Scatter-Gather List 00:12:27.890 SGL Command Set: Supported 00:12:27.890 SGL Keyed: Not Supported 00:12:27.890 SGL Bit Bucket Descriptor: Not Supported 00:12:27.890 SGL Metadata Pointer: Not Supported 00:12:27.890 Oversized SGL: Not Supported 00:12:27.890 SGL Metadata Address: Not Supported 00:12:27.890 SGL Offset: Not Supported 00:12:27.890 Transport SGL Data Block: Not Supported 00:12:27.890 Replay Protected Memory Block: Not Supported 00:12:27.890 00:12:27.890 Firmware Slot Information 00:12:27.890 ========================= 00:12:27.890 Active slot: 1 00:12:27.890 Slot 1 Firmware Revision: 1.0 00:12:27.890 00:12:27.890 00:12:27.890 Commands Supported and Effects 00:12:27.890 ============================== 00:12:27.891 Admin Commands 00:12:27.891 -------------- 00:12:27.891 Delete I/O Submission Queue (00h): Supported 00:12:27.891 Create I/O Submission Queue (01h): Supported 00:12:27.891 Get Log Page (02h): Supported 00:12:27.891 Delete I/O Completion Queue (04h): Supported 00:12:27.891 Create I/O Completion Queue (05h): Supported 00:12:27.891 Identify (06h): Supported 00:12:27.891 Abort (08h): Supported 00:12:27.891 Set Features (09h): Supported 00:12:27.891 Get Features (0Ah): Supported 00:12:27.891 Asynchronous Event Request (0Ch): Supported 00:12:27.891 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:27.891 Directive 
Send (19h): Supported 00:12:27.891 Directive Receive (1Ah): Supported 00:12:27.891 Virtualization Management (1Ch): Supported 00:12:27.891 Doorbell Buffer Config (7Ch): Supported 00:12:27.891 Format NVM (80h): Supported LBA-Change 00:12:27.891 I/O Commands 00:12:27.891 ------------ 00:12:27.891 Flush (00h): Supported LBA-Change 00:12:27.891 Write (01h): Supported LBA-Change 00:12:27.891 Read (02h): Supported 00:12:27.891 Compare (05h): Supported 00:12:27.891 Write Zeroes (08h): Supported LBA-Change 00:12:27.891 Dataset Management (09h): Supported LBA-Change 00:12:27.891 Unknown (0Ch): Supported 00:12:27.891 Unknown (12h): Supported 00:12:27.891 Copy (19h): Supported LBA-Change 00:12:27.891 Unknown (1Dh): Supported LBA-Change 00:12:27.891 00:12:27.891 Error Log 00:12:27.891 ========= 00:12:27.891 00:12:27.891 Arbitration 00:12:27.891 =========== 00:12:27.891 Arbitration Burst: no limit 00:12:27.891 00:12:27.891 Power Management 00:12:27.891 ================ 00:12:27.891 Number of Power States: 1 00:12:27.891 Current Power State: Power State #0 00:12:27.891 Power State #0: 00:12:27.891 Max Power: 25.00 W 00:12:27.891 Non-Operational State: Operational 00:12:27.891 Entry Latency: 16 microseconds 00:12:27.891 Exit Latency: 4 microseconds 00:12:27.891 Relative Read Throughput: 0 00:12:27.891 Relative Read Latency: 0 00:12:27.891 Relative Write Throughput: 0 00:12:27.891 Relative Write Latency: 0 00:12:27.891 Idle Power[2024-07-25 17:03:20.320773] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68638 terminated unexpected 00:12:27.891 : Not Reported 00:12:27.891 Active Power: Not Reported 00:12:27.891 Non-Operational Permissive Mode: Not Supported 00:12:27.891 00:12:27.891 Health Information 00:12:27.891 ================== 00:12:27.891 Critical Warnings: 00:12:27.891 Available Spare Space: OK 00:12:27.891 Temperature: OK 00:12:27.891 Device Reliability: OK 00:12:27.891 Read Only: No 00:12:27.891 Volatile Memory Backup: OK 00:12:27.891 Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.891 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:27.891 Available Spare: 0% 00:12:27.891 Available Spare Threshold: 0% 00:12:27.891 Life Percentage Used: 0% 00:12:27.891 Data Units Read: 745 00:12:27.891 Data Units Written: 636 00:12:27.891 Host Read Commands: 33599 00:12:27.891 Host Write Commands: 32637 00:12:27.891 Controller Busy Time: 0 minutes 00:12:27.891 Power Cycles: 0 00:12:27.891 Power On Hours: 0 hours 00:12:27.891 Unsafe Shutdowns: 0 00:12:27.891 Unrecoverable Media Errors: 0 00:12:27.891 Lifetime Error Log Entries: 0 00:12:27.891 Warning Temperature Time: 0 minutes 00:12:27.891 Critical Temperature Time: 0 minutes 00:12:27.891 00:12:27.891 Number of Queues 00:12:27.891 ================ 00:12:27.891 Number of I/O Submission Queues: 64 00:12:27.891 Number of I/O Completion Queues: 64 00:12:27.891 00:12:27.891 ZNS Specific Controller Data 00:12:27.891 ============================ 00:12:27.891 Zone Append Size Limit: 0 00:12:27.891 00:12:27.891 00:12:27.891 Active Namespaces 00:12:27.891 ================= 00:12:27.891 Namespace ID:1 00:12:27.891 Error Recovery Timeout: Unlimited 00:12:27.891 Command Set Identifier: NVM (00h) 00:12:27.891 Deallocate: Supported 00:12:27.891 Deallocated/Unwritten Error: Supported 00:12:27.891 Deallocated Read Value: All 0x00 00:12:27.891 Deallocate in Write Zeroes: Not Supported 00:12:27.891 Deallocated Guard Field: 0xFFFF 00:12:27.891 Flush: Supported 00:12:27.891 Reservation: Not Supported 00:12:27.891 Metadata Transferred as: 
Separate Metadata Buffer 00:12:27.891 Namespace Sharing Capabilities: Private 00:12:27.891 Size (in LBAs): 1548666 (5GiB) 00:12:27.891 Capacity (in LBAs): 1548666 (5GiB) 00:12:27.891 Utilization (in LBAs): 1548666 (5GiB) 00:12:27.891 Thin Provisioning: Not Supported 00:12:27.891 Per-NS Atomic Units: No 00:12:27.891 Maximum Single Source Range Length: 128 00:12:27.891 Maximum Copy Length: 128 00:12:27.891 Maximum Source Range Count: 128 00:12:27.891 NGUID/EUI64 Never Reused: No 00:12:27.891 Namespace Write Protected: No 00:12:27.891 Number of LBA Formats: 8 00:12:27.891 Current LBA Format: LBA Format #07 00:12:27.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.891 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:27.891 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:27.891 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:27.891 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:27.891 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:27.891 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:27.891 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:27.891 00:12:27.891 NVM Specific Namespace Data 00:12:27.891 =========================== 00:12:27.891 Logical Block Storage Tag Mask: 0 00:12:27.891 Protection Information Capabilities: 00:12:27.891 16b Guard Protection Information Storage Tag Support: No 00:12:27.891 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:27.891 Storage Tag Check Read Support: No 00:12:27.891 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.891 ===================================================== 00:12:27.891 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:27.891 ===================================================== 00:12:27.891 Controller Capabilities/Features 00:12:27.891 ================================ 00:12:27.891 Vendor ID: 1b36 00:12:27.891 Subsystem Vendor ID: 1af4 00:12:27.891 Serial Number: 12341 00:12:27.891 Model Number: QEMU NVMe Ctrl 00:12:27.891 Firmware Version: 8.0.0 00:12:27.891 Recommended Arb Burst: 6 00:12:27.891 IEEE OUI Identifier: 00 54 52 00:12:27.891 Multi-path I/O 00:12:27.891 May have multiple subsystem ports: No 00:12:27.891 May have multiple controllers: No 00:12:27.891 Associated with SR-IOV VF: No 00:12:27.891 Max Data Transfer Size: 524288 00:12:27.891 Max Number of Namespaces: 256 00:12:27.891 Max Number of I/O Queues: 64 00:12:27.891 NVMe Specification Version (VS): 1.4 00:12:27.891 NVMe Specification Version (Identify): 1.4 00:12:27.891 Maximum Queue Entries: 2048 00:12:27.891 Contiguous Queues Required: Yes 00:12:27.891 Arbitration Mechanisms Supported 00:12:27.891 Weighted Round Robin: Not Supported 00:12:27.891 Vendor Specific: Not Supported 00:12:27.891 Reset Timeout: 7500 ms 
00:12:27.891 Doorbell Stride: 4 bytes 00:12:27.891 NVM Subsystem Reset: Not Supported 00:12:27.891 Command Sets Supported 00:12:27.891 NVM Command Set: Supported 00:12:27.891 Boot Partition: Not Supported 00:12:27.891 Memory Page Size Minimum: 4096 bytes 00:12:27.891 Memory Page Size Maximum: 65536 bytes 00:12:27.891 Persistent Memory Region: Not Supported 00:12:27.891 Optional Asynchronous Events Supported 00:12:27.891 Namespace Attribute Notices: Supported 00:12:27.891 Firmware Activation Notices: Not Supported 00:12:27.891 ANA Change Notices: Not Supported 00:12:27.891 PLE Aggregate Log Change Notices: Not Supported 00:12:27.891 LBA Status Info Alert Notices: Not Supported 00:12:27.891 EGE Aggregate Log Change Notices: Not Supported 00:12:27.891 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.891 Zone Descriptor Change Notices: Not Supported 00:12:27.891 Discovery Log Change Notices: Not Supported 00:12:27.891 Controller Attributes 00:12:27.891 128-bit Host Identifier: Not Supported 00:12:27.891 Non-Operational Permissive Mode: Not Supported 00:12:27.891 NVM Sets: Not Supported 00:12:27.892 Read Recovery Levels: Not Supported 00:12:27.892 Endurance Groups: Not Supported 00:12:27.892 Predictable Latency Mode: Not Supported 00:12:27.892 Traffic Based Keep ALive: Not Supported 00:12:27.892 Namespace Granularity: Not Supported 00:12:27.892 SQ Associations: Not Supported 00:12:27.892 UUID List: Not Supported 00:12:27.892 Multi-Domain Subsystem: Not Supported 00:12:27.892 Fixed Capacity Management: Not Supported 00:12:27.892 Variable Capacity Management: Not Supported 00:12:27.892 Delete Endurance Group: Not Supported 00:12:27.892 Delete NVM Set: Not Supported 00:12:27.892 Extended LBA Formats Supported: Supported 00:12:27.892 Flexible Data Placement Supported: Not Supported 00:12:27.892 00:12:27.892 Controller Memory Buffer Support 00:12:27.892 ================================ 00:12:27.892 Supported: No 00:12:27.892 00:12:27.892 Persistent Memory Region Support 00:12:27.892 ================================ 00:12:27.892 Supported: No 00:12:27.892 00:12:27.892 Admin Command Set Attributes 00:12:27.892 ============================ 00:12:27.892 Security Send/Receive: Not Supported 00:12:27.892 Format NVM: Supported 00:12:27.892 Firmware Activate/Download: Not Supported 00:12:27.892 Namespace Management: Supported 00:12:27.892 Device Self-Test: Not Supported 00:12:27.892 Directives: Supported 00:12:27.892 NVMe-MI: Not Supported 00:12:27.892 Virtualization Management: Not Supported 00:12:27.892 Doorbell Buffer Config: Supported 00:12:27.892 Get LBA Status Capability: Not Supported 00:12:27.892 Command & Feature Lockdown Capability: Not Supported 00:12:27.892 Abort Command Limit: 4 00:12:27.892 Async Event Request Limit: 4 00:12:27.892 Number of Firmware Slots: N/A 00:12:27.892 Firmware Slot 1 Read-Only: N/A 00:12:27.892 Firmware Activation Without Reset: N/A 00:12:27.892 Multiple Update Detection Support: N/A 00:12:27.892 Firmware Update Granularity: No Information Provided 00:12:27.892 Per-Namespace SMART Log: Yes 00:12:27.892 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.892 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:27.892 Command Effects Log Page: Supported 00:12:27.892 Get Log Page Extended Data: Supported 00:12:27.892 Telemetry Log Pages: Not Supported 00:12:27.892 Persistent Event Log Pages: Not Supported 00:12:27.892 Supported Log Pages Log Page: May Support 00:12:27.892 Commands Supported & Effects Log Page: Not Supported 00:12:27.892 Feature Identifiers & 
Effects Log Page:May Support 00:12:27.892 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.892 Data Area 4 for Telemetry Log: Not Supported 00:12:27.892 Error Log Page Entries Supported: 1 00:12:27.892 Keep Alive: Not Supported 00:12:27.892 00:12:27.892 NVM Command Set Attributes 00:12:27.892 ========================== 00:12:27.892 Submission Queue Entry Size 00:12:27.892 Max: 64 00:12:27.892 Min: 64 00:12:27.892 Completion Queue Entry Size 00:12:27.892 Max: 16 00:12:27.892 Min: 16 00:12:27.892 Number of Namespaces: 256 00:12:27.892 Compare Command: Supported 00:12:27.892 Write Uncorrectable Command: Not Supported 00:12:27.892 Dataset Management Command: Supported 00:12:27.892 Write Zeroes Command: Supported 00:12:27.892 Set Features Save Field: Supported 00:12:27.892 Reservations: Not Supported 00:12:27.892 Timestamp: Supported 00:12:27.892 Copy: Supported 00:12:27.892 Volatile Write Cache: Present 00:12:27.892 Atomic Write Unit (Normal): 1 00:12:27.892 Atomic Write Unit (PFail): 1 00:12:27.892 Atomic Compare & Write Unit: 1 00:12:27.892 Fused Compare & Write: Not Supported 00:12:27.892 Scatter-Gather List 00:12:27.892 SGL Command Set: Supported 00:12:27.892 SGL Keyed: Not Supported 00:12:27.892 SGL Bit Bucket Descriptor: Not Supported 00:12:27.892 SGL Metadata Pointer: Not Supported 00:12:27.892 Oversized SGL: Not Supported 00:12:27.892 SGL Metadata Address: Not Supported 00:12:27.892 SGL Offset: Not Supported 00:12:27.892 Transport SGL Data Block: Not Supported 00:12:27.892 Replay Protected Memory Block: Not Supported 00:12:27.892 00:12:27.892 Firmware Slot Information 00:12:27.892 ========================= 00:12:27.892 Active slot: 1 00:12:27.892 Slot 1 Firmware Revision: 1.0 00:12:27.892 00:12:27.892 00:12:27.892 Commands Supported and Effects 00:12:27.892 ============================== 00:12:27.892 Admin Commands 00:12:27.892 -------------- 00:12:27.892 Delete I/O Submission Queue (00h): Supported 00:12:27.892 Create I/O Submission Queue (01h): Supported 00:12:27.892 Get Log Page (02h): Supported 00:12:27.892 Delete I/O Completion Queue (04h): Supported 00:12:27.892 Create I/O Completion Queue (05h): Supported 00:12:27.892 Identify (06h): Supported 00:12:27.892 Abort (08h): Supported 00:12:27.892 Set Features (09h): Supported 00:12:27.892 Get Features (0Ah): Supported 00:12:27.892 Asynchronous Event Request (0Ch): Supported 00:12:27.892 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:27.892 Directive Send (19h): Supported 00:12:27.892 Directive Receive (1Ah): Supported 00:12:27.892 Virtualization Management (1Ch): Supported 00:12:27.892 Doorbell Buffer Config (7Ch): Supported 00:12:27.892 Format NVM (80h): Supported LBA-Change 00:12:27.892 I/O Commands 00:12:27.892 ------------ 00:12:27.892 Flush (00h): Supported LBA-Change 00:12:27.892 Write (01h): Supported LBA-Change 00:12:27.892 Read (02h): Supported 00:12:27.892 Compare (05h): Supported 00:12:27.892 Write Zeroes (08h): Supported LBA-Change 00:12:27.892 Dataset Management (09h): Supported LBA-Change 00:12:27.892 Unknown (0Ch): Supported 00:12:27.892 Unknown (12h): Supported 00:12:27.892 Copy (19h): Supported LBA-Change 00:12:27.892 Unknown (1Dh): Supported LBA-Change 00:12:27.892 00:12:27.892 Error Log 00:12:27.892 ========= 00:12:27.892 00:12:27.892 Arbitration 00:12:27.892 =========== 00:12:27.892 Arbitration Burst: no limit 00:12:27.892 00:12:27.892 Power Management 00:12:27.892 ================ 00:12:27.892 Number of Power States: 1 00:12:27.892 Current Power State: Power State #0 00:12:27.892 Power 
State #0: 00:12:27.892 Max Power: 25.00 W 00:12:27.892 Non-Operational State: Operational 00:12:27.892 Entry Latency: 16 microseconds 00:12:27.892 Exit Latency: 4 microseconds 00:12:27.892 Relative Read Throughput: 0 00:12:27.892 Relative Read Latency: 0 00:12:27.892 Relative Write Throughput: 0 00:12:27.892 Relative Write Latency: 0 00:12:27.892 Idle Power: Not Reported 00:12:27.892 Active Power: Not Reported 00:12:27.892 Non-Operational Permissive Mode: Not Supported 00:12:27.892 00:12:27.892 Health Information 00:12:27.892 ================== 00:12:27.892 Critical Warnings: 00:12:27.892 Available Spare Space: OK 00:12:27.892 Temperature: [2024-07-25 17:03:20.321809] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68638 terminated unexpected 00:12:27.892 OK 00:12:27.892 Device Reliability: OK 00:12:27.892 Read Only: No 00:12:27.892 Volatile Memory Backup: OK 00:12:27.892 Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.892 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:27.892 Available Spare: 0% 00:12:27.892 Available Spare Threshold: 0% 00:12:27.892 Life Percentage Used: 0% 00:12:27.892 Data Units Read: 1193 00:12:27.892 Data Units Written: 982 00:12:27.892 Host Read Commands: 52167 00:12:27.892 Host Write Commands: 49319 00:12:27.892 Controller Busy Time: 0 minutes 00:12:27.892 Power Cycles: 0 00:12:27.892 Power On Hours: 0 hours 00:12:27.892 Unsafe Shutdowns: 0 00:12:27.892 Unrecoverable Media Errors: 0 00:12:27.892 Lifetime Error Log Entries: 0 00:12:27.892 Warning Temperature Time: 0 minutes 00:12:27.892 Critical Temperature Time: 0 minutes 00:12:27.892 00:12:27.892 Number of Queues 00:12:27.892 ================ 00:12:27.892 Number of I/O Submission Queues: 64 00:12:27.892 Number of I/O Completion Queues: 64 00:12:27.892 00:12:27.892 ZNS Specific Controller Data 00:12:27.892 ============================ 00:12:27.892 Zone Append Size Limit: 0 00:12:27.892 00:12:27.892 00:12:27.892 Active Namespaces 00:12:27.892 ================= 00:12:27.892 Namespace ID:1 00:12:27.892 Error Recovery Timeout: Unlimited 00:12:27.892 Command Set Identifier: NVM (00h) 00:12:27.892 Deallocate: Supported 00:12:27.892 Deallocated/Unwritten Error: Supported 00:12:27.892 Deallocated Read Value: All 0x00 00:12:27.892 Deallocate in Write Zeroes: Not Supported 00:12:27.892 Deallocated Guard Field: 0xFFFF 00:12:27.892 Flush: Supported 00:12:27.892 Reservation: Not Supported 00:12:27.893 Namespace Sharing Capabilities: Private 00:12:27.893 Size (in LBAs): 1310720 (5GiB) 00:12:27.893 Capacity (in LBAs): 1310720 (5GiB) 00:12:27.893 Utilization (in LBAs): 1310720 (5GiB) 00:12:27.893 Thin Provisioning: Not Supported 00:12:27.893 Per-NS Atomic Units: No 00:12:27.893 Maximum Single Source Range Length: 128 00:12:27.893 Maximum Copy Length: 128 00:12:27.893 Maximum Source Range Count: 128 00:12:27.893 NGUID/EUI64 Never Reused: No 00:12:27.893 Namespace Write Protected: No 00:12:27.893 Number of LBA Formats: 8 00:12:27.893 Current LBA Format: LBA Format #04 00:12:27.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.893 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:27.893 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:27.893 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:27.893 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:27.893 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:27.893 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:27.893 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:27.893 00:12:27.893 NVM 
Specific Namespace Data 00:12:27.893 =========================== 00:12:27.893 Logical Block Storage Tag Mask: 0 00:12:27.893 Protection Information Capabilities: 00:12:27.893 16b Guard Protection Information Storage Tag Support: No 00:12:27.893 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:27.893 Storage Tag Check Read Support: No 00:12:27.893 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.893 ===================================================== 00:12:27.893 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:27.893 ===================================================== 00:12:27.893 Controller Capabilities/Features 00:12:27.893 ================================ 00:12:27.893 Vendor ID: 1b36 00:12:27.893 Subsystem Vendor ID: 1af4 00:12:27.893 Serial Number: 12343 00:12:27.893 Model Number: QEMU NVMe Ctrl 00:12:27.893 Firmware Version: 8.0.0 00:12:27.893 Recommended Arb Burst: 6 00:12:27.893 IEEE OUI Identifier: 00 54 52 00:12:27.893 Multi-path I/O 00:12:27.893 May have multiple subsystem ports: No 00:12:27.893 May have multiple controllers: Yes 00:12:27.893 Associated with SR-IOV VF: No 00:12:27.893 Max Data Transfer Size: 524288 00:12:27.893 Max Number of Namespaces: 256 00:12:27.893 Max Number of I/O Queues: 64 00:12:27.893 NVMe Specification Version (VS): 1.4 00:12:27.893 NVMe Specification Version (Identify): 1.4 00:12:27.893 Maximum Queue Entries: 2048 00:12:27.893 Contiguous Queues Required: Yes 00:12:27.893 Arbitration Mechanisms Supported 00:12:27.893 Weighted Round Robin: Not Supported 00:12:27.893 Vendor Specific: Not Supported 00:12:27.893 Reset Timeout: 7500 ms 00:12:27.893 Doorbell Stride: 4 bytes 00:12:27.893 NVM Subsystem Reset: Not Supported 00:12:27.893 Command Sets Supported 00:12:27.893 NVM Command Set: Supported 00:12:27.893 Boot Partition: Not Supported 00:12:27.893 Memory Page Size Minimum: 4096 bytes 00:12:27.893 Memory Page Size Maximum: 65536 bytes 00:12:27.893 Persistent Memory Region: Not Supported 00:12:27.893 Optional Asynchronous Events Supported 00:12:27.893 Namespace Attribute Notices: Supported 00:12:27.893 Firmware Activation Notices: Not Supported 00:12:27.893 ANA Change Notices: Not Supported 00:12:27.893 PLE Aggregate Log Change Notices: Not Supported 00:12:27.893 LBA Status Info Alert Notices: Not Supported 00:12:27.893 EGE Aggregate Log Change Notices: Not Supported 00:12:27.893 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.893 Zone Descriptor Change Notices: Not Supported 00:12:27.893 Discovery Log Change Notices: Not Supported 00:12:27.893 Controller Attributes 00:12:27.893 128-bit Host Identifier: Not Supported 00:12:27.893 Non-Operational Permissive Mode: Not Supported 00:12:27.893 NVM Sets: Not Supported 00:12:27.893 Read Recovery 
Levels: Not Supported 00:12:27.893 Endurance Groups: Supported 00:12:27.893 Predictable Latency Mode: Not Supported 00:12:27.893 Traffic Based Keep ALive: Not Supported 00:12:27.893 Namespace Granularity: Not Supported 00:12:27.893 SQ Associations: Not Supported 00:12:27.893 UUID List: Not Supported 00:12:27.893 Multi-Domain Subsystem: Not Supported 00:12:27.893 Fixed Capacity Management: Not Supported 00:12:27.893 Variable Capacity Management: Not Supported 00:12:27.893 Delete Endurance Group: Not Supported 00:12:27.893 Delete NVM Set: Not Supported 00:12:27.893 Extended LBA Formats Supported: Supported 00:12:27.893 Flexible Data Placement Supported: Supported 00:12:27.893 00:12:27.893 Controller Memory Buffer Support 00:12:27.893 ================================ 00:12:27.893 Supported: No 00:12:27.893 00:12:27.893 Persistent Memory Region Support 00:12:27.893 ================================ 00:12:27.893 Supported: No 00:12:27.893 00:12:27.893 Admin Command Set Attributes 00:12:27.893 ============================ 00:12:27.893 Security Send/Receive: Not Supported 00:12:27.893 Format NVM: Supported 00:12:27.893 Firmware Activate/Download: Not Supported 00:12:27.893 Namespace Management: Supported 00:12:27.893 Device Self-Test: Not Supported 00:12:27.893 Directives: Supported 00:12:27.893 NVMe-MI: Not Supported 00:12:27.893 Virtualization Management: Not Supported 00:12:27.893 Doorbell Buffer Config: Supported 00:12:27.893 Get LBA Status Capability: Not Supported 00:12:27.893 Command & Feature Lockdown Capability: Not Supported 00:12:27.893 Abort Command Limit: 4 00:12:27.893 Async Event Request Limit: 4 00:12:27.893 Number of Firmware Slots: N/A 00:12:27.893 Firmware Slot 1 Read-Only: N/A 00:12:27.893 Firmware Activation Without Reset: N/A 00:12:27.893 Multiple Update Detection Support: N/A 00:12:27.893 Firmware Update Granularity: No Information Provided 00:12:27.893 Per-Namespace SMART Log: Yes 00:12:27.893 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.893 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:27.893 Command Effects Log Page: Supported 00:12:27.893 Get Log Page Extended Data: Supported 00:12:27.893 Telemetry Log Pages: Not Supported 00:12:27.893 Persistent Event Log Pages: Not Supported 00:12:27.893 Supported Log Pages Log Page: May Support 00:12:27.893 Commands Supported & Effects Log Page: Not Supported 00:12:27.893 Feature Identifiers & Effects Log Page:May Support 00:12:27.893 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.893 Data Area 4 for Telemetry Log: Not Supported 00:12:27.893 Error Log Page Entries Supported: 1 00:12:27.893 Keep Alive: Not Supported 00:12:27.893 00:12:27.893 NVM Command Set Attributes 00:12:27.893 ========================== 00:12:27.893 Submission Queue Entry Size 00:12:27.893 Max: 64 00:12:27.893 Min: 64 00:12:27.893 Completion Queue Entry Size 00:12:27.893 Max: 16 00:12:27.893 Min: 16 00:12:27.893 Number of Namespaces: 256 00:12:27.893 Compare Command: Supported 00:12:27.893 Write Uncorrectable Command: Not Supported 00:12:27.893 Dataset Management Command: Supported 00:12:27.893 Write Zeroes Command: Supported 00:12:27.893 Set Features Save Field: Supported 00:12:27.893 Reservations: Not Supported 00:12:27.893 Timestamp: Supported 00:12:27.893 Copy: Supported 00:12:27.893 Volatile Write Cache: Present 00:12:27.893 Atomic Write Unit (Normal): 1 00:12:27.893 Atomic Write Unit (PFail): 1 00:12:27.893 Atomic Compare & Write Unit: 1 00:12:27.893 Fused Compare & Write: Not Supported 00:12:27.893 Scatter-Gather List 
00:12:27.893 SGL Command Set: Supported 00:12:27.893 SGL Keyed: Not Supported 00:12:27.893 SGL Bit Bucket Descriptor: Not Supported 00:12:27.893 SGL Metadata Pointer: Not Supported 00:12:27.893 Oversized SGL: Not Supported 00:12:27.893 SGL Metadata Address: Not Supported 00:12:27.894 SGL Offset: Not Supported 00:12:27.894 Transport SGL Data Block: Not Supported 00:12:27.894 Replay Protected Memory Block: Not Supported 00:12:27.894 00:12:27.894 Firmware Slot Information 00:12:27.894 ========================= 00:12:27.894 Active slot: 1 00:12:27.894 Slot 1 Firmware Revision: 1.0 00:12:27.894 00:12:27.894 00:12:27.894 Commands Supported and Effects 00:12:27.894 ============================== 00:12:27.894 Admin Commands 00:12:27.894 -------------- 00:12:27.894 Delete I/O Submission Queue (00h): Supported 00:12:27.894 Create I/O Submission Queue (01h): Supported 00:12:27.894 Get Log Page (02h): Supported 00:12:27.894 Delete I/O Completion Queue (04h): Supported 00:12:27.894 Create I/O Completion Queue (05h): Supported 00:12:27.894 Identify (06h): Supported 00:12:27.894 Abort (08h): Supported 00:12:27.894 Set Features (09h): Supported 00:12:27.894 Get Features (0Ah): Supported 00:12:27.894 Asynchronous Event Request (0Ch): Supported 00:12:27.894 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:27.894 Directive Send (19h): Supported 00:12:27.894 Directive Receive (1Ah): Supported 00:12:27.894 Virtualization Management (1Ch): Supported 00:12:27.894 Doorbell Buffer Config (7Ch): Supported 00:12:27.894 Format NVM (80h): Supported LBA-Change 00:12:27.894 I/O Commands 00:12:27.894 ------------ 00:12:27.894 Flush (00h): Supported LBA-Change 00:12:27.894 Write (01h): Supported LBA-Change 00:12:27.894 Read (02h): Supported 00:12:27.894 Compare (05h): Supported 00:12:27.894 Write Zeroes (08h): Supported LBA-Change 00:12:27.894 Dataset Management (09h): Supported LBA-Change 00:12:27.894 Unknown (0Ch): Supported 00:12:27.894 Unknown (12h): Supported 00:12:27.894 Copy (19h): Supported LBA-Change 00:12:27.894 Unknown (1Dh): Supported LBA-Change 00:12:27.894 00:12:27.894 Error Log 00:12:27.894 ========= 00:12:27.894 00:12:27.894 Arbitration 00:12:27.894 =========== 00:12:27.894 Arbitration Burst: no limit 00:12:27.894 00:12:27.894 Power Management 00:12:27.894 ================ 00:12:27.894 Number of Power States: 1 00:12:27.894 Current Power State: Power State #0 00:12:27.894 Power State #0: 00:12:27.894 Max Power: 25.00 W 00:12:27.894 Non-Operational State: Operational 00:12:27.894 Entry Latency: 16 microseconds 00:12:27.894 Exit Latency: 4 microseconds 00:12:27.894 Relative Read Throughput: 0 00:12:27.894 Relative Read Latency: 0 00:12:27.894 Relative Write Throughput: 0 00:12:27.894 Relative Write Latency: 0 00:12:27.894 Idle Power: Not Reported 00:12:27.894 Active Power: Not Reported 00:12:27.894 Non-Operational Permissive Mode: Not Supported 00:12:27.894 00:12:27.894 Health Information 00:12:27.894 ================== 00:12:27.894 Critical Warnings: 00:12:27.894 Available Spare Space: OK 00:12:27.894 Temperature: OK 00:12:27.894 Device Reliability: OK 00:12:27.894 Read Only: No 00:12:27.894 Volatile Memory Backup: OK 00:12:27.894 Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.894 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:27.894 Available Spare: 0% 00:12:27.894 Available Spare Threshold: 0% 00:12:27.894 Life Percentage Used: 0% 00:12:27.894 Data Units Read: 866 00:12:27.894 Data Units Written: 760 00:12:27.894 Host Read Commands: 34941 00:12:27.894 Host Write Commands: 33531 
00:12:27.894 Controller Busy Time: 0 minutes 00:12:27.894 Power Cycles: 0 00:12:27.894 Power On Hours: 0 hours 00:12:27.894 Unsafe Shutdowns: 0 00:12:27.894 Unrecoverable Media Errors: 0 00:12:27.894 Lifetime Error Log Entries: 0 00:12:27.894 Warning Temperature Time: 0 minutes 00:12:27.894 Critical Temperature Time: 0 minutes 00:12:27.894 00:12:27.894 Number of Queues 00:12:27.894 ================ 00:12:27.894 Number of I/O Submission Queues: 64 00:12:27.894 Number of I/O Completion Queues: 64 00:12:27.894 00:12:27.894 ZNS Specific Controller Data 00:12:27.894 ============================ 00:12:27.894 Zone Append Size Limit: 0 00:12:27.894 00:12:27.894 00:12:27.894 Active Namespaces 00:12:27.894 ================= 00:12:27.894 Namespace ID:1 00:12:27.894 Error Recovery Timeout: Unlimited 00:12:27.894 Command Set Identifier: NVM (00h) 00:12:27.894 Deallocate: Supported 00:12:27.894 Deallocated/Unwritten Error: Supported 00:12:27.894 Deallocated Read Value: All 0x00 00:12:27.894 Deallocate in Write Zeroes: Not Supported 00:12:27.894 Deallocated Guard Field: 0xFFFF 00:12:27.894 Flush: Supported 00:12:27.894 Reservation: Not Supported 00:12:27.894 Namespace Sharing Capabilities: Multiple Controllers 00:12:27.894 Size (in LBAs): 262144 (1GiB) 00:12:27.894 Capacity (in LBAs): 262144 (1GiB) 00:12:27.894 Utilization (in LBAs): 262144 (1GiB) 00:12:27.894 Thin Provisioning: Not Supported 00:12:27.894 Per-NS Atomic Units: No 00:12:27.894 Maximum Single Source Range Length: 128 00:12:27.894 Maximum Copy Length: 128 00:12:27.894 Maximum Source Range Count: 128 00:12:27.894 NGUID/EUI64 Never Reused: No 00:12:27.894 Namespace Write Protected: No 00:12:27.894 Endurance group ID: 1 00:12:27.894 Number of LBA Formats: 8 00:12:27.894 Current LBA Format: LBA Format #04 00:12:27.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.894 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:27.894 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:27.894 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:27.894 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:27.894 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:27.894 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:27.894 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:27.894 00:12:27.894 Get Feature FDP: 00:12:27.894 ================ 00:12:27.894 Enabled: Yes 00:12:27.894 FDP configuration index: 0 00:12:27.894 00:12:27.894 FDP configurations log page 00:12:27.894 =========================== 00:12:27.894 Number of FDP configurations: 1 00:12:27.894 Version: 0 00:12:27.894 Size: 112 00:12:27.894 FDP Configuration Descriptor: 0 00:12:27.894 Descriptor Size: 96 00:12:27.894 Reclaim Group Identifier format: 2 00:12:27.894 FDP Volatile Write Cache: Not Present 00:12:27.894 FDP Configuration: Valid 00:12:27.894 Vendor Specific Size: 0 00:12:27.894 Number of Reclaim Groups: 2 00:12:27.894 Number of Recalim Unit Handles: 8 00:12:27.894 Max Placement Identifiers: 128 00:12:27.894 Number of Namespaces Suppprted: 256 00:12:27.894 Reclaim unit Nominal Size: 6000000 bytes 00:12:27.894 Estimated Reclaim Unit Time Limit: Not Reported 00:12:27.894 RUH Desc #000: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #001: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #002: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #003: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #004: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #005: RUH Type: Initially Isolated 00:12:27.894 RUH Desc #006: RUH Type: Initially Isolated 
00:12:27.894 RUH Desc #007: RUH Type: Initially Isolated 00:12:27.894 00:12:27.894 FDP reclaim unit handle usage log page 00:12:27.894 ====================================== 00:12:27.894 Number of Reclaim Unit Handles: 8 00:12:27.895 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:27.895 RUH Usage Desc #001: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #002: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #003: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #004: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #005: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #006: RUH Attributes: Unused 00:12:27.895 RUH Usage Desc #007: RUH Attributes: Unused 00:12:27.895 00:12:27.895 FDP statistics log page 00:12:27.895 ======================= 00:12:27.895 Host bytes with metadata written: 471113728 00:12:27.895 Medi[2024-07-25 17:03:20.324169] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68638 terminated unexpected 00:12:27.895 a bytes with metadata written: 471179264 00:12:27.895 Media bytes erased: 0 00:12:27.895 00:12:27.895 FDP events log page 00:12:27.895 =================== 00:12:27.895 Number of FDP events: 0 00:12:27.895 00:12:27.895 NVM Specific Namespace Data 00:12:27.895 =========================== 00:12:27.895 Logical Block Storage Tag Mask: 0 00:12:27.895 Protection Information Capabilities: 00:12:27.895 16b Guard Protection Information Storage Tag Support: No 00:12:27.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:27.895 Storage Tag Check Read Support: No 00:12:27.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.895 ===================================================== 00:12:27.895 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:27.895 ===================================================== 00:12:27.895 Controller Capabilities/Features 00:12:27.895 ================================ 00:12:27.895 Vendor ID: 1b36 00:12:27.895 Subsystem Vendor ID: 1af4 00:12:27.895 Serial Number: 12342 00:12:27.895 Model Number: QEMU NVMe Ctrl 00:12:27.895 Firmware Version: 8.0.0 00:12:27.895 Recommended Arb Burst: 6 00:12:27.895 IEEE OUI Identifier: 00 54 52 00:12:27.895 Multi-path I/O 00:12:27.895 May have multiple subsystem ports: No 00:12:27.895 May have multiple controllers: No 00:12:27.895 Associated with SR-IOV VF: No 00:12:27.895 Max Data Transfer Size: 524288 00:12:27.895 Max Number of Namespaces: 256 00:12:27.895 Max Number of I/O Queues: 64 00:12:27.895 NVMe Specification Version (VS): 1.4 00:12:27.895 NVMe Specification Version (Identify): 1.4 00:12:27.895 Maximum Queue Entries: 2048 00:12:27.895 Contiguous Queues Required: Yes 00:12:27.895 Arbitration Mechanisms Supported 00:12:27.895 Weighted Round Robin: Not 
Supported 00:12:27.895 Vendor Specific: Not Supported 00:12:27.895 Reset Timeout: 7500 ms 00:12:27.895 Doorbell Stride: 4 bytes 00:12:27.895 NVM Subsystem Reset: Not Supported 00:12:27.895 Command Sets Supported 00:12:27.895 NVM Command Set: Supported 00:12:27.895 Boot Partition: Not Supported 00:12:27.895 Memory Page Size Minimum: 4096 bytes 00:12:27.895 Memory Page Size Maximum: 65536 bytes 00:12:27.895 Persistent Memory Region: Not Supported 00:12:27.895 Optional Asynchronous Events Supported 00:12:27.895 Namespace Attribute Notices: Supported 00:12:27.895 Firmware Activation Notices: Not Supported 00:12:27.895 ANA Change Notices: Not Supported 00:12:27.895 PLE Aggregate Log Change Notices: Not Supported 00:12:27.895 LBA Status Info Alert Notices: Not Supported 00:12:27.895 EGE Aggregate Log Change Notices: Not Supported 00:12:27.895 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.895 Zone Descriptor Change Notices: Not Supported 00:12:27.895 Discovery Log Change Notices: Not Supported 00:12:27.895 Controller Attributes 00:12:27.895 128-bit Host Identifier: Not Supported 00:12:27.895 Non-Operational Permissive Mode: Not Supported 00:12:27.895 NVM Sets: Not Supported 00:12:27.895 Read Recovery Levels: Not Supported 00:12:27.895 Endurance Groups: Not Supported 00:12:27.895 Predictable Latency Mode: Not Supported 00:12:27.895 Traffic Based Keep ALive: Not Supported 00:12:27.895 Namespace Granularity: Not Supported 00:12:27.895 SQ Associations: Not Supported 00:12:27.895 UUID List: Not Supported 00:12:27.895 Multi-Domain Subsystem: Not Supported 00:12:27.895 Fixed Capacity Management: Not Supported 00:12:27.895 Variable Capacity Management: Not Supported 00:12:27.895 Delete Endurance Group: Not Supported 00:12:27.895 Delete NVM Set: Not Supported 00:12:27.895 Extended LBA Formats Supported: Supported 00:12:27.895 Flexible Data Placement Supported: Not Supported 00:12:27.895 00:12:27.895 Controller Memory Buffer Support 00:12:27.895 ================================ 00:12:27.895 Supported: No 00:12:27.895 00:12:27.895 Persistent Memory Region Support 00:12:27.895 ================================ 00:12:27.895 Supported: No 00:12:27.895 00:12:27.895 Admin Command Set Attributes 00:12:27.895 ============================ 00:12:27.895 Security Send/Receive: Not Supported 00:12:27.895 Format NVM: Supported 00:12:27.895 Firmware Activate/Download: Not Supported 00:12:27.895 Namespace Management: Supported 00:12:27.895 Device Self-Test: Not Supported 00:12:27.895 Directives: Supported 00:12:27.895 NVMe-MI: Not Supported 00:12:27.895 Virtualization Management: Not Supported 00:12:27.895 Doorbell Buffer Config: Supported 00:12:27.895 Get LBA Status Capability: Not Supported 00:12:27.895 Command & Feature Lockdown Capability: Not Supported 00:12:27.895 Abort Command Limit: 4 00:12:27.895 Async Event Request Limit: 4 00:12:27.895 Number of Firmware Slots: N/A 00:12:27.895 Firmware Slot 1 Read-Only: N/A 00:12:27.895 Firmware Activation Without Reset: N/A 00:12:27.895 Multiple Update Detection Support: N/A 00:12:27.895 Firmware Update Granularity: No Information Provided 00:12:27.895 Per-Namespace SMART Log: Yes 00:12:27.895 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.895 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:27.895 Command Effects Log Page: Supported 00:12:27.895 Get Log Page Extended Data: Supported 00:12:27.895 Telemetry Log Pages: Not Supported 00:12:27.895 Persistent Event Log Pages: Not Supported 00:12:27.895 Supported Log Pages Log Page: May Support 
00:12:27.895 Commands Supported & Effects Log Page: Not Supported 00:12:27.896 Feature Identifiers & Effects Log Page:May Support 00:12:27.896 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.896 Data Area 4 for Telemetry Log: Not Supported 00:12:27.896 Error Log Page Entries Supported: 1 00:12:27.896 Keep Alive: Not Supported 00:12:27.896 00:12:27.896 NVM Command Set Attributes 00:12:27.896 ========================== 00:12:27.896 Submission Queue Entry Size 00:12:27.896 Max: 64 00:12:27.896 Min: 64 00:12:27.896 Completion Queue Entry Size 00:12:27.896 Max: 16 00:12:27.896 Min: 16 00:12:27.896 Number of Namespaces: 256 00:12:27.896 Compare Command: Supported 00:12:27.896 Write Uncorrectable Command: Not Supported 00:12:27.896 Dataset Management Command: Supported 00:12:27.896 Write Zeroes Command: Supported 00:12:27.896 Set Features Save Field: Supported 00:12:27.896 Reservations: Not Supported 00:12:27.896 Timestamp: Supported 00:12:27.896 Copy: Supported 00:12:27.896 Volatile Write Cache: Present 00:12:27.896 Atomic Write Unit (Normal): 1 00:12:27.896 Atomic Write Unit (PFail): 1 00:12:27.896 Atomic Compare & Write Unit: 1 00:12:27.896 Fused Compare & Write: Not Supported 00:12:27.896 Scatter-Gather List 00:12:27.896 SGL Command Set: Supported 00:12:27.896 SGL Keyed: Not Supported 00:12:27.896 SGL Bit Bucket Descriptor: Not Supported 00:12:27.896 SGL Metadata Pointer: Not Supported 00:12:27.896 Oversized SGL: Not Supported 00:12:27.896 SGL Metadata Address: Not Supported 00:12:27.896 SGL Offset: Not Supported 00:12:27.896 Transport SGL Data Block: Not Supported 00:12:27.896 Replay Protected Memory Block: Not Supported 00:12:27.896 00:12:27.896 Firmware Slot Information 00:12:27.896 ========================= 00:12:27.896 Active slot: 1 00:12:27.896 Slot 1 Firmware Revision: 1.0 00:12:27.896 00:12:27.896 00:12:27.896 Commands Supported and Effects 00:12:27.896 ============================== 00:12:27.896 Admin Commands 00:12:27.896 -------------- 00:12:27.896 Delete I/O Submission Queue (00h): Supported 00:12:27.896 Create I/O Submission Queue (01h): Supported 00:12:27.896 Get Log Page (02h): Supported 00:12:27.896 Delete I/O Completion Queue (04h): Supported 00:12:27.896 Create I/O Completion Queue (05h): Supported 00:12:27.896 Identify (06h): Supported 00:12:27.896 Abort (08h): Supported 00:12:27.896 Set Features (09h): Supported 00:12:27.896 Get Features (0Ah): Supported 00:12:27.896 Asynchronous Event Request (0Ch): Supported 00:12:27.896 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:27.896 Directive Send (19h): Supported 00:12:27.896 Directive Receive (1Ah): Supported 00:12:27.896 Virtualization Management (1Ch): Supported 00:12:27.896 Doorbell Buffer Config (7Ch): Supported 00:12:27.896 Format NVM (80h): Supported LBA-Change 00:12:27.896 I/O Commands 00:12:27.896 ------------ 00:12:27.896 Flush (00h): Supported LBA-Change 00:12:27.896 Write (01h): Supported LBA-Change 00:12:27.896 Read (02h): Supported 00:12:27.896 Compare (05h): Supported 00:12:27.896 Write Zeroes (08h): Supported LBA-Change 00:12:27.896 Dataset Management (09h): Supported LBA-Change 00:12:27.896 Unknown (0Ch): Supported 00:12:27.896 Unknown (12h): Supported 00:12:27.896 Copy (19h): Supported LBA-Change 00:12:27.896 Unknown (1Dh): Supported LBA-Change 00:12:27.896 00:12:27.896 Error Log 00:12:27.896 ========= 00:12:27.896 00:12:27.896 Arbitration 00:12:27.896 =========== 00:12:27.896 Arbitration Burst: no limit 00:12:27.896 00:12:27.896 Power Management 00:12:27.896 ================ 
00:12:27.896 Number of Power States: 1 00:12:27.896 Current Power State: Power State #0 00:12:27.896 Power State #0: 00:12:27.896 Max Power: 25.00 W 00:12:27.896 Non-Operational State: Operational 00:12:27.896 Entry Latency: 16 microseconds 00:12:27.896 Exit Latency: 4 microseconds 00:12:27.896 Relative Read Throughput: 0 00:12:27.896 Relative Read Latency: 0 00:12:27.896 Relative Write Throughput: 0 00:12:27.896 Relative Write Latency: 0 00:12:27.896 Idle Power: Not Reported 00:12:27.896 Active Power: Not Reported 00:12:27.896 Non-Operational Permissive Mode: Not Supported 00:12:27.896 00:12:27.896 Health Information 00:12:27.896 ================== 00:12:27.896 Critical Warnings: 00:12:27.896 Available Spare Space: OK 00:12:27.896 Temperature: OK 00:12:27.896 Device Reliability: OK 00:12:27.896 Read Only: No 00:12:27.896 Volatile Memory Backup: OK 00:12:27.896 Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.896 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:27.896 Available Spare: 0% 00:12:27.896 Available Spare Threshold: 0% 00:12:27.896 Life Percentage Used: 0% 00:12:27.896 Data Units Read: 2332 00:12:27.896 Data Units Written: 2012 00:12:27.896 Host Read Commands: 102682 00:12:27.896 Host Write Commands: 98454 00:12:27.896 Controller Busy Time: 0 minutes 00:12:27.896 Power Cycles: 0 00:12:27.896 Power On Hours: 0 hours 00:12:27.896 Unsafe Shutdowns: 0 00:12:27.896 Unrecoverable Media Errors: 0 00:12:27.896 Lifetime Error Log Entries: 0 00:12:27.896 Warning Temperature Time: 0 minutes 00:12:27.896 Critical Temperature Time: 0 minutes 00:12:27.896 00:12:27.896 Number of Queues 00:12:27.896 ================ 00:12:27.896 Number of I/O Submission Queues: 64 00:12:27.896 Number of I/O Completion Queues: 64 00:12:27.896 00:12:27.896 ZNS Specific Controller Data 00:12:27.896 ============================ 00:12:27.896 Zone Append Size Limit: 0 00:12:27.896 00:12:27.896 00:12:27.896 Active Namespaces 00:12:27.896 ================= 00:12:27.896 Namespace ID:1 00:12:27.896 Error Recovery Timeout: Unlimited 00:12:27.896 Command Set Identifier: NVM (00h) 00:12:27.896 Deallocate: Supported 00:12:27.896 Deallocated/Unwritten Error: Supported 00:12:27.896 Deallocated Read Value: All 0x00 00:12:27.896 Deallocate in Write Zeroes: Not Supported 00:12:27.896 Deallocated Guard Field: 0xFFFF 00:12:27.896 Flush: Supported 00:12:27.896 Reservation: Not Supported 00:12:27.896 Namespace Sharing Capabilities: Private 00:12:27.896 Size (in LBAs): 1048576 (4GiB) 00:12:27.896 Capacity (in LBAs): 1048576 (4GiB) 00:12:27.896 Utilization (in LBAs): 1048576 (4GiB) 00:12:27.896 Thin Provisioning: Not Supported 00:12:27.896 Per-NS Atomic Units: No 00:12:27.896 Maximum Single Source Range Length: 128 00:12:27.896 Maximum Copy Length: 128 00:12:27.896 Maximum Source Range Count: 128 00:12:27.896 NGUID/EUI64 Never Reused: No 00:12:27.896 Namespace Write Protected: No 00:12:27.896 Number of LBA Formats: 8 00:12:27.896 Current LBA Format: LBA Format #04 00:12:27.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.896 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:27.896 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:27.896 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:27.896 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:27.896 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:27.896 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:27.896 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:27.896 00:12:27.896 NVM Specific Namespace Data 00:12:27.896 
=========================== 00:12:27.896 Logical Block Storage Tag Mask: 0 00:12:27.896 Protection Information Capabilities: 00:12:27.896 16b Guard Protection Information Storage Tag Support: No 00:12:27.896 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:27.896 Storage Tag Check Read Support: No 00:12:27.896 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.896 Namespace ID:2 00:12:27.896 Error Recovery Timeout: Unlimited 00:12:27.896 Command Set Identifier: NVM (00h) 00:12:27.896 Deallocate: Supported 00:12:27.896 Deallocated/Unwritten Error: Supported 00:12:27.896 Deallocated Read Value: All 0x00 00:12:27.896 Deallocate in Write Zeroes: Not Supported 00:12:27.896 Deallocated Guard Field: 0xFFFF 00:12:27.896 Flush: Supported 00:12:27.896 Reservation: Not Supported 00:12:27.896 Namespace Sharing Capabilities: Private 00:12:27.896 Size (in LBAs): 1048576 (4GiB) 00:12:27.896 Capacity (in LBAs): 1048576 (4GiB) 00:12:27.896 Utilization (in LBAs): 1048576 (4GiB) 00:12:27.897 Thin Provisioning: Not Supported 00:12:27.897 Per-NS Atomic Units: No 00:12:27.897 Maximum Single Source Range Length: 128 00:12:27.897 Maximum Copy Length: 128 00:12:27.897 Maximum Source Range Count: 128 00:12:27.897 NGUID/EUI64 Never Reused: No 00:12:27.897 Namespace Write Protected: No 00:12:27.897 Number of LBA Formats: 8 00:12:27.897 Current LBA Format: LBA Format #04 00:12:27.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.897 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:27.897 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:27.897 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:27.897 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:27.897 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:27.897 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:27.897 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:27.897 00:12:27.897 NVM Specific Namespace Data 00:12:27.897 =========================== 00:12:27.897 Logical Block Storage Tag Mask: 0 00:12:27.897 Protection Information Capabilities: 00:12:27.897 16b Guard Protection Information Storage Tag Support: No 00:12:27.897 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:27.897 Storage Tag Check Read Support: No 00:12:27.897 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:27.897 Namespace ID:3 00:12:27.897 Error Recovery Timeout: Unlimited 00:12:27.897 Command Set Identifier: NVM (00h) 00:12:27.897 Deallocate: Supported 00:12:27.897 Deallocated/Unwritten Error: Supported 00:12:27.897 Deallocated Read Value: All 0x00 00:12:27.897 Deallocate in Write Zeroes: Not Supported 00:12:27.897 Deallocated Guard Field: 0xFFFF 00:12:27.897 Flush: Supported 00:12:27.897 Reservation: Not Supported 00:12:27.897 Namespace Sharing Capabilities: Private 00:12:27.897 Size (in LBAs): 1048576 (4GiB) 00:12:28.156 Capacity (in LBAs): 1048576 (4GiB) 00:12:28.156 Utilization (in LBAs): 1048576 (4GiB) 00:12:28.156 Thin Provisioning: Not Supported 00:12:28.156 Per-NS Atomic Units: No 00:12:28.156 Maximum Single Source Range Length: 128 00:12:28.156 Maximum Copy Length: 128 00:12:28.156 Maximum Source Range Count: 128 00:12:28.156 NGUID/EUI64 Never Reused: No 00:12:28.156 Namespace Write Protected: No 00:12:28.156 Number of LBA Formats: 8 00:12:28.156 Current LBA Format: LBA Format #04 00:12:28.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.156 00:12:28.156 NVM Specific Namespace Data 00:12:28.156 =========================== 00:12:28.156 Logical Block Storage Tag Mask: 0 00:12:28.156 Protection Information Capabilities: 00:12:28.156 16b Guard Protection Information Storage Tag Support: No 00:12:28.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:28.156 Storage Tag Check Read Support: No 00:12:28.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.156 17:03:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:28.156 17:03:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:28.415 ===================================================== 00:12:28.415 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:28.415 ===================================================== 00:12:28.415 
Controller Capabilities/Features 00:12:28.415 ================================ 00:12:28.415 Vendor ID: 1b36 00:12:28.415 Subsystem Vendor ID: 1af4 00:12:28.415 Serial Number: 12340 00:12:28.415 Model Number: QEMU NVMe Ctrl 00:12:28.415 Firmware Version: 8.0.0 00:12:28.415 Recommended Arb Burst: 6 00:12:28.415 IEEE OUI Identifier: 00 54 52 00:12:28.415 Multi-path I/O 00:12:28.415 May have multiple subsystem ports: No 00:12:28.415 May have multiple controllers: No 00:12:28.415 Associated with SR-IOV VF: No 00:12:28.415 Max Data Transfer Size: 524288 00:12:28.415 Max Number of Namespaces: 256 00:12:28.415 Max Number of I/O Queues: 64 00:12:28.415 NVMe Specification Version (VS): 1.4 00:12:28.416 NVMe Specification Version (Identify): 1.4 00:12:28.416 Maximum Queue Entries: 2048 00:12:28.416 Contiguous Queues Required: Yes 00:12:28.416 Arbitration Mechanisms Supported 00:12:28.416 Weighted Round Robin: Not Supported 00:12:28.416 Vendor Specific: Not Supported 00:12:28.416 Reset Timeout: 7500 ms 00:12:28.416 Doorbell Stride: 4 bytes 00:12:28.416 NVM Subsystem Reset: Not Supported 00:12:28.416 Command Sets Supported 00:12:28.416 NVM Command Set: Supported 00:12:28.416 Boot Partition: Not Supported 00:12:28.416 Memory Page Size Minimum: 4096 bytes 00:12:28.416 Memory Page Size Maximum: 65536 bytes 00:12:28.416 Persistent Memory Region: Not Supported 00:12:28.416 Optional Asynchronous Events Supported 00:12:28.416 Namespace Attribute Notices: Supported 00:12:28.416 Firmware Activation Notices: Not Supported 00:12:28.416 ANA Change Notices: Not Supported 00:12:28.416 PLE Aggregate Log Change Notices: Not Supported 00:12:28.416 LBA Status Info Alert Notices: Not Supported 00:12:28.416 EGE Aggregate Log Change Notices: Not Supported 00:12:28.416 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.416 Zone Descriptor Change Notices: Not Supported 00:12:28.416 Discovery Log Change Notices: Not Supported 00:12:28.416 Controller Attributes 00:12:28.416 128-bit Host Identifier: Not Supported 00:12:28.416 Non-Operational Permissive Mode: Not Supported 00:12:28.416 NVM Sets: Not Supported 00:12:28.416 Read Recovery Levels: Not Supported 00:12:28.416 Endurance Groups: Not Supported 00:12:28.416 Predictable Latency Mode: Not Supported 00:12:28.416 Traffic Based Keep ALive: Not Supported 00:12:28.416 Namespace Granularity: Not Supported 00:12:28.416 SQ Associations: Not Supported 00:12:28.416 UUID List: Not Supported 00:12:28.416 Multi-Domain Subsystem: Not Supported 00:12:28.416 Fixed Capacity Management: Not Supported 00:12:28.416 Variable Capacity Management: Not Supported 00:12:28.416 Delete Endurance Group: Not Supported 00:12:28.416 Delete NVM Set: Not Supported 00:12:28.416 Extended LBA Formats Supported: Supported 00:12:28.416 Flexible Data Placement Supported: Not Supported 00:12:28.416 00:12:28.416 Controller Memory Buffer Support 00:12:28.416 ================================ 00:12:28.416 Supported: No 00:12:28.416 00:12:28.416 Persistent Memory Region Support 00:12:28.416 ================================ 00:12:28.416 Supported: No 00:12:28.416 00:12:28.416 Admin Command Set Attributes 00:12:28.416 ============================ 00:12:28.416 Security Send/Receive: Not Supported 00:12:28.416 Format NVM: Supported 00:12:28.416 Firmware Activate/Download: Not Supported 00:12:28.416 Namespace Management: Supported 00:12:28.416 Device Self-Test: Not Supported 00:12:28.416 Directives: Supported 00:12:28.416 NVMe-MI: Not Supported 00:12:28.416 Virtualization Management: Not Supported 00:12:28.416 
Doorbell Buffer Config: Supported 00:12:28.416 Get LBA Status Capability: Not Supported 00:12:28.416 Command & Feature Lockdown Capability: Not Supported 00:12:28.416 Abort Command Limit: 4 00:12:28.416 Async Event Request Limit: 4 00:12:28.416 Number of Firmware Slots: N/A 00:12:28.416 Firmware Slot 1 Read-Only: N/A 00:12:28.416 Firmware Activation Without Reset: N/A 00:12:28.416 Multiple Update Detection Support: N/A 00:12:28.416 Firmware Update Granularity: No Information Provided 00:12:28.416 Per-Namespace SMART Log: Yes 00:12:28.416 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.416 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:28.416 Command Effects Log Page: Supported 00:12:28.416 Get Log Page Extended Data: Supported 00:12:28.416 Telemetry Log Pages: Not Supported 00:12:28.416 Persistent Event Log Pages: Not Supported 00:12:28.416 Supported Log Pages Log Page: May Support 00:12:28.416 Commands Supported & Effects Log Page: Not Supported 00:12:28.416 Feature Identifiers & Effects Log Page:May Support 00:12:28.416 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.416 Data Area 4 for Telemetry Log: Not Supported 00:12:28.416 Error Log Page Entries Supported: 1 00:12:28.416 Keep Alive: Not Supported 00:12:28.416 00:12:28.416 NVM Command Set Attributes 00:12:28.416 ========================== 00:12:28.416 Submission Queue Entry Size 00:12:28.416 Max: 64 00:12:28.416 Min: 64 00:12:28.416 Completion Queue Entry Size 00:12:28.416 Max: 16 00:12:28.416 Min: 16 00:12:28.416 Number of Namespaces: 256 00:12:28.416 Compare Command: Supported 00:12:28.416 Write Uncorrectable Command: Not Supported 00:12:28.416 Dataset Management Command: Supported 00:12:28.416 Write Zeroes Command: Supported 00:12:28.416 Set Features Save Field: Supported 00:12:28.416 Reservations: Not Supported 00:12:28.416 Timestamp: Supported 00:12:28.416 Copy: Supported 00:12:28.416 Volatile Write Cache: Present 00:12:28.416 Atomic Write Unit (Normal): 1 00:12:28.416 Atomic Write Unit (PFail): 1 00:12:28.416 Atomic Compare & Write Unit: 1 00:12:28.416 Fused Compare & Write: Not Supported 00:12:28.416 Scatter-Gather List 00:12:28.416 SGL Command Set: Supported 00:12:28.416 SGL Keyed: Not Supported 00:12:28.416 SGL Bit Bucket Descriptor: Not Supported 00:12:28.416 SGL Metadata Pointer: Not Supported 00:12:28.416 Oversized SGL: Not Supported 00:12:28.416 SGL Metadata Address: Not Supported 00:12:28.416 SGL Offset: Not Supported 00:12:28.416 Transport SGL Data Block: Not Supported 00:12:28.416 Replay Protected Memory Block: Not Supported 00:12:28.416 00:12:28.416 Firmware Slot Information 00:12:28.416 ========================= 00:12:28.416 Active slot: 1 00:12:28.416 Slot 1 Firmware Revision: 1.0 00:12:28.416 00:12:28.416 00:12:28.416 Commands Supported and Effects 00:12:28.416 ============================== 00:12:28.416 Admin Commands 00:12:28.416 -------------- 00:12:28.416 Delete I/O Submission Queue (00h): Supported 00:12:28.416 Create I/O Submission Queue (01h): Supported 00:12:28.416 Get Log Page (02h): Supported 00:12:28.416 Delete I/O Completion Queue (04h): Supported 00:12:28.416 Create I/O Completion Queue (05h): Supported 00:12:28.416 Identify (06h): Supported 00:12:28.416 Abort (08h): Supported 00:12:28.416 Set Features (09h): Supported 00:12:28.416 Get Features (0Ah): Supported 00:12:28.416 Asynchronous Event Request (0Ch): Supported 00:12:28.416 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:28.416 Directive Send (19h): Supported 00:12:28.416 Directive Receive (1Ah): Supported 
00:12:28.416 Virtualization Management (1Ch): Supported 00:12:28.416 Doorbell Buffer Config (7Ch): Supported 00:12:28.416 Format NVM (80h): Supported LBA-Change 00:12:28.416 I/O Commands 00:12:28.416 ------------ 00:12:28.416 Flush (00h): Supported LBA-Change 00:12:28.416 Write (01h): Supported LBA-Change 00:12:28.416 Read (02h): Supported 00:12:28.416 Compare (05h): Supported 00:12:28.416 Write Zeroes (08h): Supported LBA-Change 00:12:28.416 Dataset Management (09h): Supported LBA-Change 00:12:28.416 Unknown (0Ch): Supported 00:12:28.416 Unknown (12h): Supported 00:12:28.416 Copy (19h): Supported LBA-Change 00:12:28.416 Unknown (1Dh): Supported LBA-Change 00:12:28.416 00:12:28.416 Error Log 00:12:28.416 ========= 00:12:28.416 00:12:28.416 Arbitration 00:12:28.416 =========== 00:12:28.416 Arbitration Burst: no limit 00:12:28.416 00:12:28.416 Power Management 00:12:28.416 ================ 00:12:28.416 Number of Power States: 1 00:12:28.416 Current Power State: Power State #0 00:12:28.416 Power State #0: 00:12:28.416 Max Power: 25.00 W 00:12:28.416 Non-Operational State: Operational 00:12:28.416 Entry Latency: 16 microseconds 00:12:28.416 Exit Latency: 4 microseconds 00:12:28.416 Relative Read Throughput: 0 00:12:28.416 Relative Read Latency: 0 00:12:28.416 Relative Write Throughput: 0 00:12:28.416 Relative Write Latency: 0 00:12:28.416 Idle Power: Not Reported 00:12:28.416 Active Power: Not Reported 00:12:28.416 Non-Operational Permissive Mode: Not Supported 00:12:28.416 00:12:28.416 Health Information 00:12:28.416 ================== 00:12:28.416 Critical Warnings: 00:12:28.416 Available Spare Space: OK 00:12:28.416 Temperature: OK 00:12:28.416 Device Reliability: OK 00:12:28.416 Read Only: No 00:12:28.416 Volatile Memory Backup: OK 00:12:28.416 Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.416 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:28.416 Available Spare: 0% 00:12:28.416 Available Spare Threshold: 0% 00:12:28.416 Life Percentage Used: 0% 00:12:28.416 Data Units Read: 745 00:12:28.416 Data Units Written: 636 00:12:28.416 Host Read Commands: 33599 00:12:28.416 Host Write Commands: 32637 00:12:28.417 Controller Busy Time: 0 minutes 00:12:28.417 Power Cycles: 0 00:12:28.417 Power On Hours: 0 hours 00:12:28.417 Unsafe Shutdowns: 0 00:12:28.417 Unrecoverable Media Errors: 0 00:12:28.417 Lifetime Error Log Entries: 0 00:12:28.417 Warning Temperature Time: 0 minutes 00:12:28.417 Critical Temperature Time: 0 minutes 00:12:28.417 00:12:28.417 Number of Queues 00:12:28.417 ================ 00:12:28.417 Number of I/O Submission Queues: 64 00:12:28.417 Number of I/O Completion Queues: 64 00:12:28.417 00:12:28.417 ZNS Specific Controller Data 00:12:28.417 ============================ 00:12:28.417 Zone Append Size Limit: 0 00:12:28.417 00:12:28.417 00:12:28.417 Active Namespaces 00:12:28.417 ================= 00:12:28.417 Namespace ID:1 00:12:28.417 Error Recovery Timeout: Unlimited 00:12:28.417 Command Set Identifier: NVM (00h) 00:12:28.417 Deallocate: Supported 00:12:28.417 Deallocated/Unwritten Error: Supported 00:12:28.417 Deallocated Read Value: All 0x00 00:12:28.417 Deallocate in Write Zeroes: Not Supported 00:12:28.417 Deallocated Guard Field: 0xFFFF 00:12:28.417 Flush: Supported 00:12:28.417 Reservation: Not Supported 00:12:28.417 Metadata Transferred as: Separate Metadata Buffer 00:12:28.417 Namespace Sharing Capabilities: Private 00:12:28.417 Size (in LBAs): 1548666 (5GiB) 00:12:28.417 Capacity (in LBAs): 1548666 (5GiB) 00:12:28.417 Utilization (in LBAs): 1548666 (5GiB) 
00:12:28.417 Thin Provisioning: Not Supported 00:12:28.417 Per-NS Atomic Units: No 00:12:28.417 Maximum Single Source Range Length: 128 00:12:28.417 Maximum Copy Length: 128 00:12:28.417 Maximum Source Range Count: 128 00:12:28.417 NGUID/EUI64 Never Reused: No 00:12:28.417 Namespace Write Protected: No 00:12:28.417 Number of LBA Formats: 8 00:12:28.417 Current LBA Format: LBA Format #07 00:12:28.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.417 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.417 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.417 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.417 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.417 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.417 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.417 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.417 00:12:28.417 NVM Specific Namespace Data 00:12:28.417 =========================== 00:12:28.417 Logical Block Storage Tag Mask: 0 00:12:28.417 Protection Information Capabilities: 00:12:28.417 16b Guard Protection Information Storage Tag Support: No 00:12:28.417 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:28.417 Storage Tag Check Read Support: No 00:12:28.417 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.417 17:03:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:28.417 17:03:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:28.677 ===================================================== 00:12:28.677 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:28.677 ===================================================== 00:12:28.677 Controller Capabilities/Features 00:12:28.677 ================================ 00:12:28.677 Vendor ID: 1b36 00:12:28.677 Subsystem Vendor ID: 1af4 00:12:28.677 Serial Number: 12341 00:12:28.677 Model Number: QEMU NVMe Ctrl 00:12:28.677 Firmware Version: 8.0.0 00:12:28.677 Recommended Arb Burst: 6 00:12:28.677 IEEE OUI Identifier: 00 54 52 00:12:28.677 Multi-path I/O 00:12:28.677 May have multiple subsystem ports: No 00:12:28.677 May have multiple controllers: No 00:12:28.677 Associated with SR-IOV VF: No 00:12:28.677 Max Data Transfer Size: 524288 00:12:28.677 Max Number of Namespaces: 256 00:12:28.677 Max Number of I/O Queues: 64 00:12:28.677 NVMe Specification Version (VS): 1.4 00:12:28.677 NVMe Specification Version (Identify): 1.4 00:12:28.677 Maximum Queue Entries: 2048 00:12:28.677 Contiguous Queues Required: Yes 00:12:28.677 Arbitration Mechanisms Supported 00:12:28.677 Weighted Round Robin: Not Supported 00:12:28.677 Vendor Specific: Not Supported 
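The nvme.sh trace lines above show how this identify pass is driven: the script loops over every PCIe BDF it found and runs spdk_nvme_identify once per controller, passing the controller's address as a PCIe transport ID string. A minimal stand-alone sketch of that pattern, with the BDF list and the relative binary path chosen here purely for illustration rather than taken from the job's configuration:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # one identify dump per controller, addressed by its PCIe BDF
        ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
    done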
00:12:28.677 Reset Timeout: 7500 ms 00:12:28.677 Doorbell Stride: 4 bytes 00:12:28.677 NVM Subsystem Reset: Not Supported 00:12:28.677 Command Sets Supported 00:12:28.677 NVM Command Set: Supported 00:12:28.677 Boot Partition: Not Supported 00:12:28.677 Memory Page Size Minimum: 4096 bytes 00:12:28.677 Memory Page Size Maximum: 65536 bytes 00:12:28.677 Persistent Memory Region: Not Supported 00:12:28.677 Optional Asynchronous Events Supported 00:12:28.677 Namespace Attribute Notices: Supported 00:12:28.677 Firmware Activation Notices: Not Supported 00:12:28.677 ANA Change Notices: Not Supported 00:12:28.677 PLE Aggregate Log Change Notices: Not Supported 00:12:28.677 LBA Status Info Alert Notices: Not Supported 00:12:28.677 EGE Aggregate Log Change Notices: Not Supported 00:12:28.677 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.677 Zone Descriptor Change Notices: Not Supported 00:12:28.677 Discovery Log Change Notices: Not Supported 00:12:28.677 Controller Attributes 00:12:28.677 128-bit Host Identifier: Not Supported 00:12:28.677 Non-Operational Permissive Mode: Not Supported 00:12:28.677 NVM Sets: Not Supported 00:12:28.677 Read Recovery Levels: Not Supported 00:12:28.677 Endurance Groups: Not Supported 00:12:28.677 Predictable Latency Mode: Not Supported 00:12:28.677 Traffic Based Keep ALive: Not Supported 00:12:28.677 Namespace Granularity: Not Supported 00:12:28.677 SQ Associations: Not Supported 00:12:28.677 UUID List: Not Supported 00:12:28.677 Multi-Domain Subsystem: Not Supported 00:12:28.677 Fixed Capacity Management: Not Supported 00:12:28.677 Variable Capacity Management: Not Supported 00:12:28.677 Delete Endurance Group: Not Supported 00:12:28.677 Delete NVM Set: Not Supported 00:12:28.677 Extended LBA Formats Supported: Supported 00:12:28.677 Flexible Data Placement Supported: Not Supported 00:12:28.677 00:12:28.677 Controller Memory Buffer Support 00:12:28.677 ================================ 00:12:28.677 Supported: No 00:12:28.677 00:12:28.677 Persistent Memory Region Support 00:12:28.677 ================================ 00:12:28.677 Supported: No 00:12:28.677 00:12:28.677 Admin Command Set Attributes 00:12:28.677 ============================ 00:12:28.677 Security Send/Receive: Not Supported 00:12:28.677 Format NVM: Supported 00:12:28.677 Firmware Activate/Download: Not Supported 00:12:28.677 Namespace Management: Supported 00:12:28.677 Device Self-Test: Not Supported 00:12:28.677 Directives: Supported 00:12:28.677 NVMe-MI: Not Supported 00:12:28.677 Virtualization Management: Not Supported 00:12:28.677 Doorbell Buffer Config: Supported 00:12:28.677 Get LBA Status Capability: Not Supported 00:12:28.677 Command & Feature Lockdown Capability: Not Supported 00:12:28.677 Abort Command Limit: 4 00:12:28.677 Async Event Request Limit: 4 00:12:28.677 Number of Firmware Slots: N/A 00:12:28.677 Firmware Slot 1 Read-Only: N/A 00:12:28.677 Firmware Activation Without Reset: N/A 00:12:28.677 Multiple Update Detection Support: N/A 00:12:28.677 Firmware Update Granularity: No Information Provided 00:12:28.677 Per-Namespace SMART Log: Yes 00:12:28.677 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.677 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:28.677 Command Effects Log Page: Supported 00:12:28.677 Get Log Page Extended Data: Supported 00:12:28.677 Telemetry Log Pages: Not Supported 00:12:28.677 Persistent Event Log Pages: Not Supported 00:12:28.677 Supported Log Pages Log Page: May Support 00:12:28.677 Commands Supported & Effects Log Page: Not Supported 
00:12:28.677 Feature Identifiers & Effects Log Page:May Support 00:12:28.677 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.677 Data Area 4 for Telemetry Log: Not Supported 00:12:28.677 Error Log Page Entries Supported: 1 00:12:28.677 Keep Alive: Not Supported 00:12:28.677 00:12:28.677 NVM Command Set Attributes 00:12:28.677 ========================== 00:12:28.677 Submission Queue Entry Size 00:12:28.677 Max: 64 00:12:28.677 Min: 64 00:12:28.677 Completion Queue Entry Size 00:12:28.677 Max: 16 00:12:28.677 Min: 16 00:12:28.677 Number of Namespaces: 256 00:12:28.677 Compare Command: Supported 00:12:28.677 Write Uncorrectable Command: Not Supported 00:12:28.677 Dataset Management Command: Supported 00:12:28.677 Write Zeroes Command: Supported 00:12:28.677 Set Features Save Field: Supported 00:12:28.677 Reservations: Not Supported 00:12:28.677 Timestamp: Supported 00:12:28.677 Copy: Supported 00:12:28.677 Volatile Write Cache: Present 00:12:28.678 Atomic Write Unit (Normal): 1 00:12:28.678 Atomic Write Unit (PFail): 1 00:12:28.678 Atomic Compare & Write Unit: 1 00:12:28.678 Fused Compare & Write: Not Supported 00:12:28.678 Scatter-Gather List 00:12:28.678 SGL Command Set: Supported 00:12:28.678 SGL Keyed: Not Supported 00:12:28.678 SGL Bit Bucket Descriptor: Not Supported 00:12:28.678 SGL Metadata Pointer: Not Supported 00:12:28.678 Oversized SGL: Not Supported 00:12:28.678 SGL Metadata Address: Not Supported 00:12:28.678 SGL Offset: Not Supported 00:12:28.678 Transport SGL Data Block: Not Supported 00:12:28.678 Replay Protected Memory Block: Not Supported 00:12:28.678 00:12:28.678 Firmware Slot Information 00:12:28.678 ========================= 00:12:28.678 Active slot: 1 00:12:28.678 Slot 1 Firmware Revision: 1.0 00:12:28.678 00:12:28.678 00:12:28.678 Commands Supported and Effects 00:12:28.678 ============================== 00:12:28.678 Admin Commands 00:12:28.678 -------------- 00:12:28.678 Delete I/O Submission Queue (00h): Supported 00:12:28.678 Create I/O Submission Queue (01h): Supported 00:12:28.678 Get Log Page (02h): Supported 00:12:28.678 Delete I/O Completion Queue (04h): Supported 00:12:28.678 Create I/O Completion Queue (05h): Supported 00:12:28.678 Identify (06h): Supported 00:12:28.678 Abort (08h): Supported 00:12:28.678 Set Features (09h): Supported 00:12:28.678 Get Features (0Ah): Supported 00:12:28.678 Asynchronous Event Request (0Ch): Supported 00:12:28.678 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:28.678 Directive Send (19h): Supported 00:12:28.678 Directive Receive (1Ah): Supported 00:12:28.678 Virtualization Management (1Ch): Supported 00:12:28.678 Doorbell Buffer Config (7Ch): Supported 00:12:28.678 Format NVM (80h): Supported LBA-Change 00:12:28.678 I/O Commands 00:12:28.678 ------------ 00:12:28.678 Flush (00h): Supported LBA-Change 00:12:28.678 Write (01h): Supported LBA-Change 00:12:28.678 Read (02h): Supported 00:12:28.678 Compare (05h): Supported 00:12:28.678 Write Zeroes (08h): Supported LBA-Change 00:12:28.678 Dataset Management (09h): Supported LBA-Change 00:12:28.678 Unknown (0Ch): Supported 00:12:28.678 Unknown (12h): Supported 00:12:28.678 Copy (19h): Supported LBA-Change 00:12:28.678 Unknown (1Dh): Supported LBA-Change 00:12:28.678 00:12:28.678 Error Log 00:12:28.678 ========= 00:12:28.678 00:12:28.678 Arbitration 00:12:28.678 =========== 00:12:28.678 Arbitration Burst: no limit 00:12:28.678 00:12:28.678 Power Management 00:12:28.678 ================ 00:12:28.678 Number of Power States: 1 00:12:28.678 Current Power State: 
Power State #0 00:12:28.678 Power State #0: 00:12:28.678 Max Power: 25.00 W 00:12:28.678 Non-Operational State: Operational 00:12:28.678 Entry Latency: 16 microseconds 00:12:28.678 Exit Latency: 4 microseconds 00:12:28.678 Relative Read Throughput: 0 00:12:28.678 Relative Read Latency: 0 00:12:28.678 Relative Write Throughput: 0 00:12:28.678 Relative Write Latency: 0 00:12:28.678 Idle Power: Not Reported 00:12:28.678 Active Power: Not Reported 00:12:28.678 Non-Operational Permissive Mode: Not Supported 00:12:28.678 00:12:28.678 Health Information 00:12:28.678 ================== 00:12:28.678 Critical Warnings: 00:12:28.678 Available Spare Space: OK 00:12:28.678 Temperature: OK 00:12:28.678 Device Reliability: OK 00:12:28.678 Read Only: No 00:12:28.678 Volatile Memory Backup: OK 00:12:28.678 Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.678 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:28.678 Available Spare: 0% 00:12:28.678 Available Spare Threshold: 0% 00:12:28.678 Life Percentage Used: 0% 00:12:28.678 Data Units Read: 1193 00:12:28.678 Data Units Written: 982 00:12:28.678 Host Read Commands: 52167 00:12:28.678 Host Write Commands: 49319 00:12:28.678 Controller Busy Time: 0 minutes 00:12:28.678 Power Cycles: 0 00:12:28.678 Power On Hours: 0 hours 00:12:28.678 Unsafe Shutdowns: 0 00:12:28.678 Unrecoverable Media Errors: 0 00:12:28.678 Lifetime Error Log Entries: 0 00:12:28.678 Warning Temperature Time: 0 minutes 00:12:28.678 Critical Temperature Time: 0 minutes 00:12:28.678 00:12:28.678 Number of Queues 00:12:28.678 ================ 00:12:28.678 Number of I/O Submission Queues: 64 00:12:28.678 Number of I/O Completion Queues: 64 00:12:28.678 00:12:28.678 ZNS Specific Controller Data 00:12:28.678 ============================ 00:12:28.678 Zone Append Size Limit: 0 00:12:28.678 00:12:28.678 00:12:28.678 Active Namespaces 00:12:28.678 ================= 00:12:28.678 Namespace ID:1 00:12:28.678 Error Recovery Timeout: Unlimited 00:12:28.678 Command Set Identifier: NVM (00h) 00:12:28.678 Deallocate: Supported 00:12:28.678 Deallocated/Unwritten Error: Supported 00:12:28.678 Deallocated Read Value: All 0x00 00:12:28.678 Deallocate in Write Zeroes: Not Supported 00:12:28.678 Deallocated Guard Field: 0xFFFF 00:12:28.678 Flush: Supported 00:12:28.678 Reservation: Not Supported 00:12:28.678 Namespace Sharing Capabilities: Private 00:12:28.678 Size (in LBAs): 1310720 (5GiB) 00:12:28.678 Capacity (in LBAs): 1310720 (5GiB) 00:12:28.678 Utilization (in LBAs): 1310720 (5GiB) 00:12:28.678 Thin Provisioning: Not Supported 00:12:28.678 Per-NS Atomic Units: No 00:12:28.678 Maximum Single Source Range Length: 128 00:12:28.678 Maximum Copy Length: 128 00:12:28.678 Maximum Source Range Count: 128 00:12:28.678 NGUID/EUI64 Never Reused: No 00:12:28.678 Namespace Write Protected: No 00:12:28.678 Number of LBA Formats: 8 00:12:28.678 Current LBA Format: LBA Format #04 00:12:28.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.678 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.678 00:12:28.678 NVM Specific Namespace Data 00:12:28.678 =========================== 00:12:28.678 Logical Block Storage Tag Mask: 0 
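The GiB figure printed in parentheses next to each LBA count is consistent with multiplying the LBA count by the data size of the current LBA format and truncating to whole GiB; for the namespace above (1310720 LBAs, current format #04 with a 4096-byte data size) that gives exactly the 5GiB reported. A quick shell check, with variable names chosen only for illustration:

    lbas=1310720; data_size=4096
    # 1310720 * 4096 bytes = 5368709120 bytes = 5 GiB exactly
    echo "$(( lbas * data_size / 1024 / 1024 / 1024 )) GiB"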
00:12:28.678 Protection Information Capabilities: 00:12:28.678 16b Guard Protection Information Storage Tag Support: No 00:12:28.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:28.678 Storage Tag Check Read Support: No 00:12:28.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.678 17:03:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:28.678 17:03:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:28.938 ===================================================== 00:12:28.938 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:28.938 ===================================================== 00:12:28.939 Controller Capabilities/Features 00:12:28.939 ================================ 00:12:28.939 Vendor ID: 1b36 00:12:28.939 Subsystem Vendor ID: 1af4 00:12:28.939 Serial Number: 12342 00:12:28.939 Model Number: QEMU NVMe Ctrl 00:12:28.939 Firmware Version: 8.0.0 00:12:28.939 Recommended Arb Burst: 6 00:12:28.939 IEEE OUI Identifier: 00 54 52 00:12:28.939 Multi-path I/O 00:12:28.939 May have multiple subsystem ports: No 00:12:28.939 May have multiple controllers: No 00:12:28.939 Associated with SR-IOV VF: No 00:12:28.939 Max Data Transfer Size: 524288 00:12:28.939 Max Number of Namespaces: 256 00:12:28.939 Max Number of I/O Queues: 64 00:12:28.939 NVMe Specification Version (VS): 1.4 00:12:28.939 NVMe Specification Version (Identify): 1.4 00:12:28.939 Maximum Queue Entries: 2048 00:12:28.939 Contiguous Queues Required: Yes 00:12:28.939 Arbitration Mechanisms Supported 00:12:28.939 Weighted Round Robin: Not Supported 00:12:28.939 Vendor Specific: Not Supported 00:12:28.939 Reset Timeout: 7500 ms 00:12:28.939 Doorbell Stride: 4 bytes 00:12:28.939 NVM Subsystem Reset: Not Supported 00:12:28.939 Command Sets Supported 00:12:28.939 NVM Command Set: Supported 00:12:28.939 Boot Partition: Not Supported 00:12:28.939 Memory Page Size Minimum: 4096 bytes 00:12:28.939 Memory Page Size Maximum: 65536 bytes 00:12:28.939 Persistent Memory Region: Not Supported 00:12:28.939 Optional Asynchronous Events Supported 00:12:28.939 Namespace Attribute Notices: Supported 00:12:28.939 Firmware Activation Notices: Not Supported 00:12:28.939 ANA Change Notices: Not Supported 00:12:28.939 PLE Aggregate Log Change Notices: Not Supported 00:12:28.939 LBA Status Info Alert Notices: Not Supported 00:12:28.939 EGE Aggregate Log Change Notices: Not Supported 00:12:28.939 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.939 Zone Descriptor Change Notices: Not Supported 00:12:28.939 Discovery Log Change Notices: Not Supported 00:12:28.939 Controller Attributes 00:12:28.939 128-bit Host Identifier: 
Not Supported 00:12:28.939 Non-Operational Permissive Mode: Not Supported 00:12:28.939 NVM Sets: Not Supported 00:12:28.939 Read Recovery Levels: Not Supported 00:12:28.939 Endurance Groups: Not Supported 00:12:28.939 Predictable Latency Mode: Not Supported 00:12:28.939 Traffic Based Keep ALive: Not Supported 00:12:28.939 Namespace Granularity: Not Supported 00:12:28.939 SQ Associations: Not Supported 00:12:28.939 UUID List: Not Supported 00:12:28.939 Multi-Domain Subsystem: Not Supported 00:12:28.939 Fixed Capacity Management: Not Supported 00:12:28.939 Variable Capacity Management: Not Supported 00:12:28.939 Delete Endurance Group: Not Supported 00:12:28.939 Delete NVM Set: Not Supported 00:12:28.939 Extended LBA Formats Supported: Supported 00:12:28.939 Flexible Data Placement Supported: Not Supported 00:12:28.939 00:12:28.939 Controller Memory Buffer Support 00:12:28.939 ================================ 00:12:28.939 Supported: No 00:12:28.939 00:12:28.939 Persistent Memory Region Support 00:12:28.939 ================================ 00:12:28.939 Supported: No 00:12:28.939 00:12:28.939 Admin Command Set Attributes 00:12:28.939 ============================ 00:12:28.939 Security Send/Receive: Not Supported 00:12:28.939 Format NVM: Supported 00:12:28.939 Firmware Activate/Download: Not Supported 00:12:28.939 Namespace Management: Supported 00:12:28.939 Device Self-Test: Not Supported 00:12:28.939 Directives: Supported 00:12:28.939 NVMe-MI: Not Supported 00:12:28.939 Virtualization Management: Not Supported 00:12:28.939 Doorbell Buffer Config: Supported 00:12:28.939 Get LBA Status Capability: Not Supported 00:12:28.939 Command & Feature Lockdown Capability: Not Supported 00:12:28.939 Abort Command Limit: 4 00:12:28.939 Async Event Request Limit: 4 00:12:28.939 Number of Firmware Slots: N/A 00:12:28.939 Firmware Slot 1 Read-Only: N/A 00:12:28.939 Firmware Activation Without Reset: N/A 00:12:28.939 Multiple Update Detection Support: N/A 00:12:28.939 Firmware Update Granularity: No Information Provided 00:12:28.939 Per-Namespace SMART Log: Yes 00:12:28.939 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.939 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:28.939 Command Effects Log Page: Supported 00:12:28.939 Get Log Page Extended Data: Supported 00:12:28.939 Telemetry Log Pages: Not Supported 00:12:28.939 Persistent Event Log Pages: Not Supported 00:12:28.939 Supported Log Pages Log Page: May Support 00:12:28.939 Commands Supported & Effects Log Page: Not Supported 00:12:28.939 Feature Identifiers & Effects Log Page:May Support 00:12:28.939 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.939 Data Area 4 for Telemetry Log: Not Supported 00:12:28.939 Error Log Page Entries Supported: 1 00:12:28.939 Keep Alive: Not Supported 00:12:28.939 00:12:28.939 NVM Command Set Attributes 00:12:28.939 ========================== 00:12:28.939 Submission Queue Entry Size 00:12:28.939 Max: 64 00:12:28.939 Min: 64 00:12:28.939 Completion Queue Entry Size 00:12:28.939 Max: 16 00:12:28.939 Min: 16 00:12:28.939 Number of Namespaces: 256 00:12:28.939 Compare Command: Supported 00:12:28.939 Write Uncorrectable Command: Not Supported 00:12:28.939 Dataset Management Command: Supported 00:12:28.939 Write Zeroes Command: Supported 00:12:28.939 Set Features Save Field: Supported 00:12:28.939 Reservations: Not Supported 00:12:28.939 Timestamp: Supported 00:12:28.939 Copy: Supported 00:12:28.939 Volatile Write Cache: Present 00:12:28.939 Atomic Write Unit (Normal): 1 00:12:28.939 Atomic Write Unit 
(PFail): 1 00:12:28.939 Atomic Compare & Write Unit: 1 00:12:28.939 Fused Compare & Write: Not Supported 00:12:28.939 Scatter-Gather List 00:12:28.939 SGL Command Set: Supported 00:12:28.939 SGL Keyed: Not Supported 00:12:28.939 SGL Bit Bucket Descriptor: Not Supported 00:12:28.939 SGL Metadata Pointer: Not Supported 00:12:28.939 Oversized SGL: Not Supported 00:12:28.939 SGL Metadata Address: Not Supported 00:12:28.939 SGL Offset: Not Supported 00:12:28.939 Transport SGL Data Block: Not Supported 00:12:28.939 Replay Protected Memory Block: Not Supported 00:12:28.939 00:12:28.939 Firmware Slot Information 00:12:28.939 ========================= 00:12:28.939 Active slot: 1 00:12:28.939 Slot 1 Firmware Revision: 1.0 00:12:28.939 00:12:28.939 00:12:28.939 Commands Supported and Effects 00:12:28.939 ============================== 00:12:28.939 Admin Commands 00:12:28.939 -------------- 00:12:28.939 Delete I/O Submission Queue (00h): Supported 00:12:28.939 Create I/O Submission Queue (01h): Supported 00:12:28.939 Get Log Page (02h): Supported 00:12:28.939 Delete I/O Completion Queue (04h): Supported 00:12:28.939 Create I/O Completion Queue (05h): Supported 00:12:28.939 Identify (06h): Supported 00:12:28.939 Abort (08h): Supported 00:12:28.939 Set Features (09h): Supported 00:12:28.939 Get Features (0Ah): Supported 00:12:28.939 Asynchronous Event Request (0Ch): Supported 00:12:28.939 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:28.939 Directive Send (19h): Supported 00:12:28.939 Directive Receive (1Ah): Supported 00:12:28.939 Virtualization Management (1Ch): Supported 00:12:28.939 Doorbell Buffer Config (7Ch): Supported 00:12:28.939 Format NVM (80h): Supported LBA-Change 00:12:28.939 I/O Commands 00:12:28.939 ------------ 00:12:28.939 Flush (00h): Supported LBA-Change 00:12:28.939 Write (01h): Supported LBA-Change 00:12:28.939 Read (02h): Supported 00:12:28.939 Compare (05h): Supported 00:12:28.939 Write Zeroes (08h): Supported LBA-Change 00:12:28.939 Dataset Management (09h): Supported LBA-Change 00:12:28.939 Unknown (0Ch): Supported 00:12:28.939 Unknown (12h): Supported 00:12:28.939 Copy (19h): Supported LBA-Change 00:12:28.939 Unknown (1Dh): Supported LBA-Change 00:12:28.939 00:12:28.939 Error Log 00:12:28.939 ========= 00:12:28.939 00:12:28.939 Arbitration 00:12:28.939 =========== 00:12:28.939 Arbitration Burst: no limit 00:12:28.939 00:12:28.939 Power Management 00:12:28.939 ================ 00:12:28.939 Number of Power States: 1 00:12:28.939 Current Power State: Power State #0 00:12:28.939 Power State #0: 00:12:28.939 Max Power: 25.00 W 00:12:28.939 Non-Operational State: Operational 00:12:28.939 Entry Latency: 16 microseconds 00:12:28.939 Exit Latency: 4 microseconds 00:12:28.939 Relative Read Throughput: 0 00:12:28.940 Relative Read Latency: 0 00:12:28.940 Relative Write Throughput: 0 00:12:28.940 Relative Write Latency: 0 00:12:28.940 Idle Power: Not Reported 00:12:28.940 Active Power: Not Reported 00:12:28.940 Non-Operational Permissive Mode: Not Supported 00:12:28.940 00:12:28.940 Health Information 00:12:28.940 ================== 00:12:28.940 Critical Warnings: 00:12:28.940 Available Spare Space: OK 00:12:28.940 Temperature: OK 00:12:28.940 Device Reliability: OK 00:12:28.940 Read Only: No 00:12:28.940 Volatile Memory Backup: OK 00:12:28.940 Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.940 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:28.940 Available Spare: 0% 00:12:28.940 Available Spare Threshold: 0% 00:12:28.940 Life Percentage Used: 0% 
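The temperatures in the health section are reported the way the NVMe SMART/Health log stores them, in Kelvin, with the Celsius equivalent added in parentheses; the conversion is just a fixed offset. A one-line check against the reading above:

    kelvin=323
    # 323 K - 273 = 50 C, matching the "323 Kelvin (50 Celsius)" line
    echo "$(( kelvin - 273 )) Celsius"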
00:12:28.940 Data Units Read: 2332 00:12:28.940 Data Units Written: 2012 00:12:28.940 Host Read Commands: 102682 00:12:28.940 Host Write Commands: 98454 00:12:28.940 Controller Busy Time: 0 minutes 00:12:28.940 Power Cycles: 0 00:12:28.940 Power On Hours: 0 hours 00:12:28.940 Unsafe Shutdowns: 0 00:12:28.940 Unrecoverable Media Errors: 0 00:12:28.940 Lifetime Error Log Entries: 0 00:12:28.940 Warning Temperature Time: 0 minutes 00:12:28.940 Critical Temperature Time: 0 minutes 00:12:28.940 00:12:28.940 Number of Queues 00:12:28.940 ================ 00:12:28.940 Number of I/O Submission Queues: 64 00:12:28.940 Number of I/O Completion Queues: 64 00:12:28.940 00:12:28.940 ZNS Specific Controller Data 00:12:28.940 ============================ 00:12:28.940 Zone Append Size Limit: 0 00:12:28.940 00:12:28.940 00:12:28.940 Active Namespaces 00:12:28.940 ================= 00:12:28.940 Namespace ID:1 00:12:28.940 Error Recovery Timeout: Unlimited 00:12:28.940 Command Set Identifier: NVM (00h) 00:12:28.940 Deallocate: Supported 00:12:28.940 Deallocated/Unwritten Error: Supported 00:12:28.940 Deallocated Read Value: All 0x00 00:12:28.940 Deallocate in Write Zeroes: Not Supported 00:12:28.940 Deallocated Guard Field: 0xFFFF 00:12:28.940 Flush: Supported 00:12:28.940 Reservation: Not Supported 00:12:28.940 Namespace Sharing Capabilities: Private 00:12:28.940 Size (in LBAs): 1048576 (4GiB) 00:12:28.940 Capacity (in LBAs): 1048576 (4GiB) 00:12:28.940 Utilization (in LBAs): 1048576 (4GiB) 00:12:28.940 Thin Provisioning: Not Supported 00:12:28.940 Per-NS Atomic Units: No 00:12:28.940 Maximum Single Source Range Length: 128 00:12:28.940 Maximum Copy Length: 128 00:12:28.940 Maximum Source Range Count: 128 00:12:28.940 NGUID/EUI64 Never Reused: No 00:12:28.940 Namespace Write Protected: No 00:12:28.940 Number of LBA Formats: 8 00:12:28.940 Current LBA Format: LBA Format #04 00:12:28.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.940 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.940 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.940 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.940 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.940 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.940 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.940 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.940 00:12:28.940 NVM Specific Namespace Data 00:12:28.940 =========================== 00:12:28.940 Logical Block Storage Tag Mask: 0 00:12:28.940 Protection Information Capabilities: 00:12:28.940 16b Guard Protection Information Storage Tag Support: No 00:12:28.940 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:28.940 Storage Tag Check Read Support: No 00:12:28.940 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Namespace ID:2 00:12:28.940 Error Recovery Timeout: Unlimited 00:12:28.940 Command Set Identifier: NVM (00h) 00:12:28.940 Deallocate: Supported 00:12:28.940 Deallocated/Unwritten Error: Supported 00:12:28.940 Deallocated Read Value: All 0x00 00:12:28.940 Deallocate in Write Zeroes: Not Supported 00:12:28.940 Deallocated Guard Field: 0xFFFF 00:12:28.940 Flush: Supported 00:12:28.940 Reservation: Not Supported 00:12:28.940 Namespace Sharing Capabilities: Private 00:12:28.940 Size (in LBAs): 1048576 (4GiB) 00:12:28.940 Capacity (in LBAs): 1048576 (4GiB) 00:12:28.940 Utilization (in LBAs): 1048576 (4GiB) 00:12:28.940 Thin Provisioning: Not Supported 00:12:28.940 Per-NS Atomic Units: No 00:12:28.940 Maximum Single Source Range Length: 128 00:12:28.940 Maximum Copy Length: 128 00:12:28.940 Maximum Source Range Count: 128 00:12:28.940 NGUID/EUI64 Never Reused: No 00:12:28.940 Namespace Write Protected: No 00:12:28.940 Number of LBA Formats: 8 00:12:28.940 Current LBA Format: LBA Format #04 00:12:28.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.940 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.940 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.940 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.940 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.940 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.940 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.940 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.940 00:12:28.940 NVM Specific Namespace Data 00:12:28.940 =========================== 00:12:28.940 Logical Block Storage Tag Mask: 0 00:12:28.940 Protection Information Capabilities: 00:12:28.940 16b Guard Protection Information Storage Tag Support: No 00:12:28.940 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:28.940 Storage Tag Check Read Support: No 00:12:28.940 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:28.940 Namespace ID:3 00:12:28.940 Error Recovery Timeout: Unlimited 00:12:28.940 Command Set Identifier: NVM (00h) 00:12:28.940 Deallocate: Supported 00:12:28.940 Deallocated/Unwritten Error: Supported 00:12:28.940 Deallocated Read Value: All 0x00 00:12:28.940 Deallocate in Write Zeroes: Not Supported 00:12:28.940 Deallocated Guard Field: 0xFFFF 00:12:28.940 Flush: Supported 00:12:28.940 Reservation: Not Supported 00:12:28.940 Namespace Sharing Capabilities: Private 00:12:28.940 Size (in LBAs): 1048576 (4GiB) 00:12:28.940 Capacity (in LBAs): 1048576 (4GiB) 00:12:28.940 Utilization (in LBAs): 1048576 (4GiB) 00:12:28.940 Thin Provisioning: Not Supported 00:12:28.940 Per-NS Atomic Units: No 00:12:28.940 Maximum Single Source Range 
Length: 128 00:12:28.940 Maximum Copy Length: 128 00:12:28.940 Maximum Source Range Count: 128 00:12:28.940 NGUID/EUI64 Never Reused: No 00:12:28.940 Namespace Write Protected: No 00:12:28.940 Number of LBA Formats: 8 00:12:28.940 Current LBA Format: LBA Format #04 00:12:28.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.940 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:28.940 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:28.940 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:28.940 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:28.940 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:28.940 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:28.940 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:28.940 00:12:28.940 NVM Specific Namespace Data 00:12:28.940 =========================== 00:12:28.940 Logical Block Storage Tag Mask: 0 00:12:28.940 Protection Information Capabilities: 00:12:28.940 16b Guard Protection Information Storage Tag Support: No 00:12:28.940 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:29.199 Storage Tag Check Read Support: No 00:12:29.199 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.199 17:03:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:29.199 17:03:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:29.458 ===================================================== 00:12:29.458 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:29.458 ===================================================== 00:12:29.458 Controller Capabilities/Features 00:12:29.458 ================================ 00:12:29.458 Vendor ID: 1b36 00:12:29.458 Subsystem Vendor ID: 1af4 00:12:29.458 Serial Number: 12343 00:12:29.458 Model Number: QEMU NVMe Ctrl 00:12:29.458 Firmware Version: 8.0.0 00:12:29.458 Recommended Arb Burst: 6 00:12:29.458 IEEE OUI Identifier: 00 54 52 00:12:29.458 Multi-path I/O 00:12:29.458 May have multiple subsystem ports: No 00:12:29.458 May have multiple controllers: Yes 00:12:29.458 Associated with SR-IOV VF: No 00:12:29.458 Max Data Transfer Size: 524288 00:12:29.458 Max Number of Namespaces: 256 00:12:29.458 Max Number of I/O Queues: 64 00:12:29.458 NVMe Specification Version (VS): 1.4 00:12:29.458 NVMe Specification Version (Identify): 1.4 00:12:29.458 Maximum Queue Entries: 2048 00:12:29.458 Contiguous Queues Required: Yes 00:12:29.458 Arbitration Mechanisms Supported 00:12:29.458 Weighted Round Robin: Not Supported 00:12:29.458 Vendor Specific: Not Supported 00:12:29.458 Reset Timeout: 7500 ms 00:12:29.458 Doorbell Stride: 4 bytes 00:12:29.458 NVM Subsystem Reset: Not Supported 
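Of the four controllers identified in this pass, only the 12343 device at 0000:00:13.0 sits behind the flexible-data-placement subsystem (nqn.2019-08.org.qemu:fdp-subsys3), which is why its capability dump below differs from the others on the Endurance Groups and Flexible Data Placement lines. When scanning a saved copy of this log, a simple filter pulls those fields out side by side; the log file name here is illustrative:

    grep -E 'Serial Number|Flexible Data Placement Supported' nvme_identify.log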
00:12:29.458 Command Sets Supported 00:12:29.458 NVM Command Set: Supported 00:12:29.458 Boot Partition: Not Supported 00:12:29.458 Memory Page Size Minimum: 4096 bytes 00:12:29.458 Memory Page Size Maximum: 65536 bytes 00:12:29.458 Persistent Memory Region: Not Supported 00:12:29.458 Optional Asynchronous Events Supported 00:12:29.458 Namespace Attribute Notices: Supported 00:12:29.458 Firmware Activation Notices: Not Supported 00:12:29.458 ANA Change Notices: Not Supported 00:12:29.458 PLE Aggregate Log Change Notices: Not Supported 00:12:29.458 LBA Status Info Alert Notices: Not Supported 00:12:29.458 EGE Aggregate Log Change Notices: Not Supported 00:12:29.458 Normal NVM Subsystem Shutdown event: Not Supported 00:12:29.458 Zone Descriptor Change Notices: Not Supported 00:12:29.458 Discovery Log Change Notices: Not Supported 00:12:29.458 Controller Attributes 00:12:29.458 128-bit Host Identifier: Not Supported 00:12:29.458 Non-Operational Permissive Mode: Not Supported 00:12:29.458 NVM Sets: Not Supported 00:12:29.458 Read Recovery Levels: Not Supported 00:12:29.458 Endurance Groups: Supported 00:12:29.458 Predictable Latency Mode: Not Supported 00:12:29.458 Traffic Based Keep Alive: Not Supported 00:12:29.458 Namespace Granularity: Not Supported 00:12:29.458 SQ Associations: Not Supported 00:12:29.458 UUID List: Not Supported 00:12:29.458 Multi-Domain Subsystem: Not Supported 00:12:29.458 Fixed Capacity Management: Not Supported 00:12:29.458 Variable Capacity Management: Not Supported 00:12:29.458 Delete Endurance Group: Not Supported 00:12:29.458 Delete NVM Set: Not Supported 00:12:29.458 Extended LBA Formats Supported: Supported 00:12:29.458 Flexible Data Placement Supported: Supported 00:12:29.458 00:12:29.458 Controller Memory Buffer Support 00:12:29.458 ================================ 00:12:29.458 Supported: No 00:12:29.458 00:12:29.459 Persistent Memory Region Support 00:12:29.459 ================================ 00:12:29.459 Supported: No 00:12:29.459 00:12:29.459 Admin Command Set Attributes 00:12:29.459 ============================ 00:12:29.459 Security Send/Receive: Not Supported 00:12:29.459 Format NVM: Supported 00:12:29.459 Firmware Activate/Download: Not Supported 00:12:29.459 Namespace Management: Supported 00:12:29.459 Device Self-Test: Not Supported 00:12:29.459 Directives: Supported 00:12:29.459 NVMe-MI: Not Supported 00:12:29.459 Virtualization Management: Not Supported 00:12:29.459 Doorbell Buffer Config: Supported 00:12:29.459 Get LBA Status Capability: Not Supported 00:12:29.459 Command & Feature Lockdown Capability: Not Supported 00:12:29.459 Abort Command Limit: 4 00:12:29.459 Async Event Request Limit: 4 00:12:29.459 Number of Firmware Slots: N/A 00:12:29.459 Firmware Slot 1 Read-Only: N/A 00:12:29.459 Firmware Activation Without Reset: N/A 00:12:29.459 Multiple Update Detection Support: N/A 00:12:29.459 Firmware Update Granularity: No Information Provided 00:12:29.459 Per-Namespace SMART Log: Yes 00:12:29.459 Asymmetric Namespace Access Log Page: Not Supported 00:12:29.459 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:29.459 Command Effects Log Page: Supported 00:12:29.459 Get Log Page Extended Data: Supported 00:12:29.459 Telemetry Log Pages: Not Supported 00:12:29.459 Persistent Event Log Pages: Not Supported 00:12:29.459 Supported Log Pages Log Page: May Support 00:12:29.459 Commands Supported & Effects Log Page: Not Supported 00:12:29.459 Feature Identifiers & Effects Log Page: May Support 00:12:29.459 NVMe-MI Commands & Effects Log Page: May
Support 00:12:29.459 Data Area 4 for Telemetry Log: Not Supported 00:12:29.459 Error Log Page Entries Supported: 1 00:12:29.459 Keep Alive: Not Supported 00:12:29.459 00:12:29.459 NVM Command Set Attributes 00:12:29.459 ========================== 00:12:29.459 Submission Queue Entry Size 00:12:29.459 Max: 64 00:12:29.459 Min: 64 00:12:29.459 Completion Queue Entry Size 00:12:29.459 Max: 16 00:12:29.459 Min: 16 00:12:29.459 Number of Namespaces: 256 00:12:29.459 Compare Command: Supported 00:12:29.459 Write Uncorrectable Command: Not Supported 00:12:29.459 Dataset Management Command: Supported 00:12:29.459 Write Zeroes Command: Supported 00:12:29.459 Set Features Save Field: Supported 00:12:29.459 Reservations: Not Supported 00:12:29.459 Timestamp: Supported 00:12:29.459 Copy: Supported 00:12:29.459 Volatile Write Cache: Present 00:12:29.459 Atomic Write Unit (Normal): 1 00:12:29.459 Atomic Write Unit (PFail): 1 00:12:29.459 Atomic Compare & Write Unit: 1 00:12:29.459 Fused Compare & Write: Not Supported 00:12:29.459 Scatter-Gather List 00:12:29.459 SGL Command Set: Supported 00:12:29.459 SGL Keyed: Not Supported 00:12:29.459 SGL Bit Bucket Descriptor: Not Supported 00:12:29.459 SGL Metadata Pointer: Not Supported 00:12:29.459 Oversized SGL: Not Supported 00:12:29.459 SGL Metadata Address: Not Supported 00:12:29.459 SGL Offset: Not Supported 00:12:29.459 Transport SGL Data Block: Not Supported 00:12:29.459 Replay Protected Memory Block: Not Supported 00:12:29.459 00:12:29.459 Firmware Slot Information 00:12:29.459 ========================= 00:12:29.459 Active slot: 1 00:12:29.459 Slot 1 Firmware Revision: 1.0 00:12:29.459 00:12:29.459 00:12:29.459 Commands Supported and Effects 00:12:29.459 ============================== 00:12:29.459 Admin Commands 00:12:29.459 -------------- 00:12:29.459 Delete I/O Submission Queue (00h): Supported 00:12:29.459 Create I/O Submission Queue (01h): Supported 00:12:29.459 Get Log Page (02h): Supported 00:12:29.459 Delete I/O Completion Queue (04h): Supported 00:12:29.459 Create I/O Completion Queue (05h): Supported 00:12:29.459 Identify (06h): Supported 00:12:29.459 Abort (08h): Supported 00:12:29.459 Set Features (09h): Supported 00:12:29.459 Get Features (0Ah): Supported 00:12:29.459 Asynchronous Event Request (0Ch): Supported 00:12:29.459 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:29.459 Directive Send (19h): Supported 00:12:29.459 Directive Receive (1Ah): Supported 00:12:29.459 Virtualization Management (1Ch): Supported 00:12:29.459 Doorbell Buffer Config (7Ch): Supported 00:12:29.459 Format NVM (80h): Supported LBA-Change 00:12:29.459 I/O Commands 00:12:29.459 ------------ 00:12:29.459 Flush (00h): Supported LBA-Change 00:12:29.459 Write (01h): Supported LBA-Change 00:12:29.459 Read (02h): Supported 00:12:29.459 Compare (05h): Supported 00:12:29.459 Write Zeroes (08h): Supported LBA-Change 00:12:29.459 Dataset Management (09h): Supported LBA-Change 00:12:29.459 Unknown (0Ch): Supported 00:12:29.459 Unknown (12h): Supported 00:12:29.459 Copy (19h): Supported LBA-Change 00:12:29.459 Unknown (1Dh): Supported LBA-Change 00:12:29.459 00:12:29.459 Error Log 00:12:29.459 ========= 00:12:29.459 00:12:29.459 Arbitration 00:12:29.459 =========== 00:12:29.459 Arbitration Burst: no limit 00:12:29.459 00:12:29.459 Power Management 00:12:29.459 ================ 00:12:29.459 Number of Power States: 1 00:12:29.459 Current Power State: Power State #0 00:12:29.459 Power State #0: 00:12:29.459 Max Power: 25.00 W 00:12:29.459 Non-Operational State: 
Operational 00:12:29.459 Entry Latency: 16 microseconds 00:12:29.459 Exit Latency: 4 microseconds 00:12:29.459 Relative Read Throughput: 0 00:12:29.459 Relative Read Latency: 0 00:12:29.459 Relative Write Throughput: 0 00:12:29.459 Relative Write Latency: 0 00:12:29.459 Idle Power: Not Reported 00:12:29.459 Active Power: Not Reported 00:12:29.459 Non-Operational Permissive Mode: Not Supported 00:12:29.459 00:12:29.459 Health Information 00:12:29.459 ================== 00:12:29.459 Critical Warnings: 00:12:29.459 Available Spare Space: OK 00:12:29.459 Temperature: OK 00:12:29.459 Device Reliability: OK 00:12:29.459 Read Only: No 00:12:29.459 Volatile Memory Backup: OK 00:12:29.459 Current Temperature: 323 Kelvin (50 Celsius) 00:12:29.459 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:29.459 Available Spare: 0% 00:12:29.459 Available Spare Threshold: 0% 00:12:29.459 Life Percentage Used: 0% 00:12:29.459 Data Units Read: 866 00:12:29.459 Data Units Written: 760 00:12:29.459 Host Read Commands: 34941 00:12:29.459 Host Write Commands: 33531 00:12:29.459 Controller Busy Time: 0 minutes 00:12:29.459 Power Cycles: 0 00:12:29.459 Power On Hours: 0 hours 00:12:29.459 Unsafe Shutdowns: 0 00:12:29.459 Unrecoverable Media Errors: 0 00:12:29.459 Lifetime Error Log Entries: 0 00:12:29.459 Warning Temperature Time: 0 minutes 00:12:29.459 Critical Temperature Time: 0 minutes 00:12:29.459 00:12:29.459 Number of Queues 00:12:29.459 ================ 00:12:29.459 Number of I/O Submission Queues: 64 00:12:29.459 Number of I/O Completion Queues: 64 00:12:29.459 00:12:29.459 ZNS Specific Controller Data 00:12:29.459 ============================ 00:12:29.459 Zone Append Size Limit: 0 00:12:29.459 00:12:29.459 00:12:29.459 Active Namespaces 00:12:29.459 ================= 00:12:29.459 Namespace ID:1 00:12:29.459 Error Recovery Timeout: Unlimited 00:12:29.459 Command Set Identifier: NVM (00h) 00:12:29.459 Deallocate: Supported 00:12:29.459 Deallocated/Unwritten Error: Supported 00:12:29.459 Deallocated Read Value: All 0x00 00:12:29.459 Deallocate in Write Zeroes: Not Supported 00:12:29.459 Deallocated Guard Field: 0xFFFF 00:12:29.459 Flush: Supported 00:12:29.459 Reservation: Not Supported 00:12:29.459 Namespace Sharing Capabilities: Multiple Controllers 00:12:29.459 Size (in LBAs): 262144 (1GiB) 00:12:29.459 Capacity (in LBAs): 262144 (1GiB) 00:12:29.459 Utilization (in LBAs): 262144 (1GiB) 00:12:29.459 Thin Provisioning: Not Supported 00:12:29.459 Per-NS Atomic Units: No 00:12:29.459 Maximum Single Source Range Length: 128 00:12:29.459 Maximum Copy Length: 128 00:12:29.459 Maximum Source Range Count: 128 00:12:29.459 NGUID/EUI64 Never Reused: No 00:12:29.459 Namespace Write Protected: No 00:12:29.459 Endurance group ID: 1 00:12:29.459 Number of LBA Formats: 8 00:12:29.459 Current LBA Format: LBA Format #04 00:12:29.459 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:29.459 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:29.459 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:29.459 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:29.459 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:29.459 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:29.460 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:29.460 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:29.460 00:12:29.460 Get Feature FDP: 00:12:29.460 ================ 00:12:29.460 Enabled: Yes 00:12:29.460 FDP configuration index: 0 00:12:29.460 00:12:29.460 FDP configurations log page 00:12:29.460 
=========================== 00:12:29.460 Number of FDP configurations: 1 00:12:29.460 Version: 0 00:12:29.460 Size: 112 00:12:29.460 FDP Configuration Descriptor: 0 00:12:29.460 Descriptor Size: 96 00:12:29.460 Reclaim Group Identifier format: 2 00:12:29.460 FDP Volatile Write Cache: Not Present 00:12:29.460 FDP Configuration: Valid 00:12:29.460 Vendor Specific Size: 0 00:12:29.460 Number of Reclaim Groups: 2 00:12:29.460 Number of Reclaim Unit Handles: 8 00:12:29.460 Max Placement Identifiers: 128 00:12:29.460 Number of Namespaces Supported: 256 00:12:29.460 Reclaim Unit Nominal Size: 6000000 bytes 00:12:29.460 Estimated Reclaim Unit Time Limit: Not Reported 00:12:29.460 RUH Desc #000: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #001: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #002: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #003: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #004: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #005: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #006: RUH Type: Initially Isolated 00:12:29.460 RUH Desc #007: RUH Type: Initially Isolated 00:12:29.460 00:12:29.460 FDP reclaim unit handle usage log page 00:12:29.460 ====================================== 00:12:29.460 Number of Reclaim Unit Handles: 8 00:12:29.460 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:29.460 RUH Usage Desc #001: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #002: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #003: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #004: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #005: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #006: RUH Attributes: Unused 00:12:29.460 RUH Usage Desc #007: RUH Attributes: Unused 00:12:29.460 00:12:29.460 FDP statistics log page 00:12:29.460 ======================= 00:12:29.460 Host bytes with metadata written: 471113728 00:12:29.460 Media bytes with metadata written: 471179264 00:12:29.460 Media bytes erased: 0 00:12:29.460 00:12:29.460 FDP events log page 00:12:29.460 =================== 00:12:29.460 Number of FDP events: 0 00:12:29.460 00:12:29.460 NVM Specific Namespace Data 00:12:29.460 =========================== 00:12:29.460 Logical Block Storage Tag Mask: 0 00:12:29.460 Protection Information Capabilities: 00:12:29.460 16b Guard Protection Information Storage Tag Support: No 00:12:29.460 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:29.460 Storage Tag Check Read Support: No 00:12:29.460 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:29.460 00:12:29.460 real 0m1.824s 00:12:29.460 user 0m0.716s 00:12:29.460 sys 0m0.884s 00:12:29.460 17:03:21 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.460 17:03:21
nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:29.460 ************************************ 00:12:29.460 END TEST nvme_identify 00:12:29.460 ************************************ 00:12:29.460 17:03:21 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:29.460 17:03:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:29.460 17:03:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.460 17:03:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.460 ************************************ 00:12:29.460 START TEST nvme_perf 00:12:29.460 ************************************ 00:12:29.460 17:03:21 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:12:29.460 17:03:21 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:30.838 Initializing NVMe Controllers 00:12:30.838 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:30.838 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:30.838 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:30.838 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:30.838 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:30.838 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:30.838 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:30.838 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:30.838 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:30.838 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:30.838 Initialization complete. Launching workers. 00:12:30.838 ======================================================== 00:12:30.838 Latency(us) 00:12:30.838 Device Information : IOPS MiB/s Average min max 00:12:30.838 PCIE (0000:00:10.0) NSID 1 from core 0: 12084.24 141.61 10607.91 8173.34 44466.28 00:12:30.838 PCIE (0000:00:11.0) NSID 1 from core 0: 12084.24 141.61 10580.37 8261.59 41500.73 00:12:30.838 PCIE (0000:00:13.0) NSID 1 from core 0: 12084.24 141.61 10550.27 8294.53 39451.81 00:12:30.838 PCIE (0000:00:12.0) NSID 1 from core 0: 12084.24 141.61 10519.84 8330.54 36584.84 00:12:30.838 PCIE (0000:00:12.0) NSID 2 from core 0: 12084.24 141.61 10489.49 8328.45 33723.50 00:12:30.838 PCIE (0000:00:12.0) NSID 3 from core 0: 12084.24 141.61 10459.17 8299.49 30791.44 00:12:30.838 ======================================================== 00:12:30.838 Total : 72505.45 849.67 10534.51 8173.34 44466.28 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8519.680us 00:12:30.838 10.00000% : 9294.196us 00:12:30.838 25.00000% : 9711.244us 00:12:30.838 50.00000% : 10187.869us 00:12:30.838 75.00000% : 10724.073us 00:12:30.838 90.00000% : 11498.589us 00:12:30.838 95.00000% : 12809.309us 00:12:30.838 98.00000% : 14417.920us 00:12:30.838 99.00000% : 32172.218us 00:12:30.838 99.50000% : 41943.040us 00:12:30.838 99.90000% : 44087.855us 00:12:30.838 99.99000% : 44564.480us 00:12:30.838 99.99900% : 44564.480us 00:12:30.838 99.99990% : 44564.480us 00:12:30.838 99.99999% : 44564.480us 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8638.836us 00:12:30.838 10.00000% : 9353.775us 00:12:30.838 25.00000% : 9770.822us 
00:12:30.838 50.00000% : 10247.447us 00:12:30.838 75.00000% : 10724.073us 00:12:30.838 90.00000% : 11379.433us 00:12:30.838 95.00000% : 12868.887us 00:12:30.838 98.00000% : 14120.029us 00:12:30.838 99.00000% : 30027.404us 00:12:30.838 99.50000% : 39083.287us 00:12:30.838 99.90000% : 41228.102us 00:12:30.838 99.99000% : 41466.415us 00:12:30.838 99.99900% : 41704.727us 00:12:30.838 99.99990% : 41704.727us 00:12:30.838 99.99999% : 41704.727us 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8638.836us 00:12:30.838 10.00000% : 9353.775us 00:12:30.838 25.00000% : 9770.822us 00:12:30.838 50.00000% : 10247.447us 00:12:30.838 75.00000% : 10664.495us 00:12:30.838 90.00000% : 11379.433us 00:12:30.838 95.00000% : 12868.887us 00:12:30.838 98.00000% : 14000.873us 00:12:30.838 99.00000% : 27882.589us 00:12:30.838 99.50000% : 36938.473us 00:12:30.838 99.90000% : 39083.287us 00:12:30.838 99.99000% : 39559.913us 00:12:30.838 99.99900% : 39559.913us 00:12:30.838 99.99990% : 39559.913us 00:12:30.838 99.99999% : 39559.913us 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8638.836us 00:12:30.838 10.00000% : 9353.775us 00:12:30.838 25.00000% : 9770.822us 00:12:30.838 50.00000% : 10247.447us 00:12:30.838 75.00000% : 10724.073us 00:12:30.838 90.00000% : 11379.433us 00:12:30.838 95.00000% : 12630.575us 00:12:30.838 98.00000% : 14060.451us 00:12:30.838 99.00000% : 25141.993us 00:12:30.838 99.50000% : 34078.720us 00:12:30.838 99.90000% : 36223.535us 00:12:30.838 99.99000% : 36700.160us 00:12:30.838 99.99900% : 36700.160us 00:12:30.838 99.99990% : 36700.160us 00:12:30.838 99.99999% : 36700.160us 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8638.836us 00:12:30.838 10.00000% : 9353.775us 00:12:30.838 25.00000% : 9770.822us 00:12:30.838 50.00000% : 10247.447us 00:12:30.838 75.00000% : 10664.495us 00:12:30.838 90.00000% : 11379.433us 00:12:30.838 95.00000% : 12511.418us 00:12:30.838 98.00000% : 14120.029us 00:12:30.838 99.00000% : 22401.396us 00:12:30.838 99.50000% : 31218.967us 00:12:30.838 99.90000% : 33363.782us 00:12:30.838 99.99000% : 33840.407us 00:12:30.838 99.99900% : 33840.407us 00:12:30.838 99.99990% : 33840.407us 00:12:30.838 99.99999% : 33840.407us 00:12:30.838 00:12:30.838 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:30.838 ================================================================================= 00:12:30.838 1.00000% : 8579.258us 00:12:30.838 10.00000% : 9353.775us 00:12:30.838 25.00000% : 9770.822us 00:12:30.838 50.00000% : 10247.447us 00:12:30.838 75.00000% : 10724.073us 00:12:30.838 90.00000% : 11379.433us 00:12:30.838 95.00000% : 12570.996us 00:12:30.838 98.00000% : 14179.607us 00:12:30.838 99.00000% : 19779.956us 00:12:30.838 99.50000% : 28240.058us 00:12:30.838 99.90000% : 30384.873us 00:12:30.838 99.99000% : 30980.655us 00:12:30.838 99.99900% : 30980.655us 00:12:30.838 99.99990% : 30980.655us 00:12:30.838 99.99999% : 30980.655us 00:12:30.838 00:12:30.838 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:30.838 
============================================================================== 00:12:30.839 Range in us Cumulative IO count 00:12:30.839 8162.211 - 8221.789: 0.0579% ( 7) 00:12:30.839 8221.789 - 8281.367: 0.1405% ( 10) 00:12:30.839 8281.367 - 8340.945: 0.2397% ( 12) 00:12:30.839 8340.945 - 8400.524: 0.4960% ( 31) 00:12:30.839 8400.524 - 8460.102: 0.8267% ( 40) 00:12:30.839 8460.102 - 8519.680: 1.2318% ( 49) 00:12:30.839 8519.680 - 8579.258: 1.6700% ( 53) 00:12:30.839 8579.258 - 8638.836: 2.1825% ( 62) 00:12:30.839 8638.836 - 8698.415: 2.6620% ( 58) 00:12:30.839 8698.415 - 8757.993: 3.2903% ( 76) 00:12:30.839 8757.993 - 8817.571: 3.9104% ( 75) 00:12:30.839 8817.571 - 8877.149: 4.5552% ( 78) 00:12:30.839 8877.149 - 8936.727: 5.0926% ( 65) 00:12:30.839 8936.727 - 8996.305: 5.6878% ( 72) 00:12:30.839 8996.305 - 9055.884: 6.3161% ( 76) 00:12:30.839 9055.884 - 9115.462: 7.1181% ( 97) 00:12:30.839 9115.462 - 9175.040: 7.8952% ( 94) 00:12:30.839 9175.040 - 9234.618: 8.8790% ( 119) 00:12:30.839 9234.618 - 9294.196: 10.0281% ( 139) 00:12:30.839 9294.196 - 9353.775: 11.3013% ( 154) 00:12:30.839 9353.775 - 9413.353: 13.0043% ( 206) 00:12:30.839 9413.353 - 9472.931: 14.9554% ( 236) 00:12:30.839 9472.931 - 9532.509: 17.1462% ( 265) 00:12:30.839 9532.509 - 9592.087: 19.5437% ( 290) 00:12:30.839 9592.087 - 9651.665: 22.2470% ( 327) 00:12:30.839 9651.665 - 9711.244: 25.0661% ( 341) 00:12:30.839 9711.244 - 9770.822: 27.8274% ( 334) 00:12:30.839 9770.822 - 9830.400: 31.0103% ( 385) 00:12:30.839 9830.400 - 9889.978: 34.1601% ( 381) 00:12:30.839 9889.978 - 9949.556: 37.2272% ( 371) 00:12:30.839 9949.556 - 10009.135: 40.4514% ( 390) 00:12:30.839 10009.135 - 10068.713: 43.8409% ( 410) 00:12:30.839 10068.713 - 10128.291: 47.1147% ( 396) 00:12:30.839 10128.291 - 10187.869: 50.3224% ( 388) 00:12:30.839 10187.869 - 10247.447: 53.6376% ( 401) 00:12:30.839 10247.447 - 10307.025: 56.8370% ( 387) 00:12:30.839 10307.025 - 10366.604: 59.9950% ( 382) 00:12:30.839 10366.604 - 10426.182: 63.1118% ( 377) 00:12:30.839 10426.182 - 10485.760: 65.9144% ( 339) 00:12:30.839 10485.760 - 10545.338: 68.6508% ( 331) 00:12:30.839 10545.338 - 10604.916: 71.0731% ( 293) 00:12:30.839 10604.916 - 10664.495: 73.5119% ( 295) 00:12:30.839 10664.495 - 10724.073: 75.5539% ( 247) 00:12:30.839 10724.073 - 10783.651: 77.3892% ( 222) 00:12:30.839 10783.651 - 10843.229: 79.2411% ( 224) 00:12:30.839 10843.229 - 10902.807: 80.7292% ( 180) 00:12:30.839 10902.807 - 10962.385: 82.0437% ( 159) 00:12:30.839 10962.385 - 11021.964: 83.3085% ( 153) 00:12:30.839 11021.964 - 11081.542: 84.3915% ( 131) 00:12:30.839 11081.542 - 11141.120: 85.5076% ( 135) 00:12:30.839 11141.120 - 11200.698: 86.4253% ( 111) 00:12:30.839 11200.698 - 11260.276: 87.3347% ( 110) 00:12:30.839 11260.276 - 11319.855: 88.2771% ( 114) 00:12:30.839 11319.855 - 11379.433: 89.0625% ( 95) 00:12:30.839 11379.433 - 11439.011: 89.9306% ( 105) 00:12:30.839 11439.011 - 11498.589: 90.5919% ( 80) 00:12:30.839 11498.589 - 11558.167: 91.1789% ( 71) 00:12:30.839 11558.167 - 11617.745: 91.6749% ( 60) 00:12:30.839 11617.745 - 11677.324: 91.9808% ( 37) 00:12:30.839 11677.324 - 11736.902: 92.2206% ( 29) 00:12:30.839 11736.902 - 11796.480: 92.4272% ( 25) 00:12:30.839 11796.480 - 11856.058: 92.6339% ( 25) 00:12:30.839 11856.058 - 11915.636: 92.8489% ( 26) 00:12:30.839 11915.636 - 11975.215: 93.0473% ( 24) 00:12:30.839 11975.215 - 12034.793: 93.2044% ( 19) 00:12:30.839 12034.793 - 12094.371: 93.3366% ( 16) 00:12:30.839 12094.371 - 12153.949: 93.5103% ( 21) 00:12:30.839 12153.949 - 12213.527: 93.6921% ( 22) 
00:12:30.839 12213.527 - 12273.105: 93.8575% ( 20) 00:12:30.839 12273.105 - 12332.684: 93.9897% ( 16) 00:12:30.839 12332.684 - 12392.262: 94.1716% ( 22) 00:12:30.839 12392.262 - 12451.840: 94.2956% ( 15) 00:12:30.839 12451.840 - 12511.418: 94.4362% ( 17) 00:12:30.839 12511.418 - 12570.996: 94.5685% ( 16) 00:12:30.839 12570.996 - 12630.575: 94.7090% ( 17) 00:12:30.839 12630.575 - 12690.153: 94.8495% ( 17) 00:12:30.839 12690.153 - 12749.731: 94.9570% ( 13) 00:12:30.839 12749.731 - 12809.309: 95.0728% ( 14) 00:12:30.839 12809.309 - 12868.887: 95.2216% ( 18) 00:12:30.839 12868.887 - 12928.465: 95.3786% ( 19) 00:12:30.839 12928.465 - 12988.044: 95.5192% ( 17) 00:12:30.839 12988.044 - 13047.622: 95.6928% ( 21) 00:12:30.839 13047.622 - 13107.200: 95.8581% ( 20) 00:12:30.839 13107.200 - 13166.778: 96.0400% ( 22) 00:12:30.839 13166.778 - 13226.356: 96.1475% ( 13) 00:12:30.839 13226.356 - 13285.935: 96.2715% ( 15) 00:12:30.839 13285.935 - 13345.513: 96.3790% ( 13) 00:12:30.839 13345.513 - 13405.091: 96.5030% ( 15) 00:12:30.839 13405.091 - 13464.669: 96.6601% ( 19) 00:12:30.839 13464.669 - 13524.247: 96.7758% ( 14) 00:12:30.839 13524.247 - 13583.825: 96.8750% ( 12) 00:12:30.839 13583.825 - 13643.404: 96.9825% ( 13) 00:12:30.839 13643.404 - 13702.982: 97.0486% ( 8) 00:12:30.839 13702.982 - 13762.560: 97.1396% ( 11) 00:12:30.839 13762.560 - 13822.138: 97.2305% ( 11) 00:12:30.839 13822.138 - 13881.716: 97.3214% ( 11) 00:12:30.839 13881.716 - 13941.295: 97.4041% ( 10) 00:12:30.839 13941.295 - 14000.873: 97.4950% ( 11) 00:12:30.839 14000.873 - 14060.451: 97.5446% ( 6) 00:12:30.839 14060.451 - 14120.029: 97.6108% ( 8) 00:12:30.839 14120.029 - 14179.607: 97.7100% ( 12) 00:12:30.839 14179.607 - 14239.185: 97.7844% ( 9) 00:12:30.839 14239.185 - 14298.764: 97.8423% ( 7) 00:12:30.839 14298.764 - 14358.342: 97.9167% ( 9) 00:12:30.839 14358.342 - 14417.920: 98.0159% ( 12) 00:12:30.839 14417.920 - 14477.498: 98.0903% ( 9) 00:12:30.839 14477.498 - 14537.076: 98.1399% ( 6) 00:12:30.839 14537.076 - 14596.655: 98.1895% ( 6) 00:12:30.839 14596.655 - 14656.233: 98.2639% ( 9) 00:12:30.839 14656.233 - 14715.811: 98.3300% ( 8) 00:12:30.839 14715.811 - 14775.389: 98.3962% ( 8) 00:12:30.839 14775.389 - 14834.967: 98.4540% ( 7) 00:12:30.839 14834.967 - 14894.545: 98.4954% ( 5) 00:12:30.839 14894.545 - 14954.124: 98.5367% ( 5) 00:12:30.839 14954.124 - 15013.702: 98.5946% ( 7) 00:12:30.839 15013.702 - 15073.280: 98.6276% ( 4) 00:12:30.839 15073.280 - 15132.858: 98.6524% ( 3) 00:12:30.839 15132.858 - 15192.436: 98.7021% ( 6) 00:12:30.839 15192.436 - 15252.015: 98.7351% ( 4) 00:12:30.839 15252.015 - 15371.171: 98.8261% ( 11) 00:12:30.839 15371.171 - 15490.327: 98.9005% ( 9) 00:12:30.839 15490.327 - 15609.484: 98.9418% ( 5) 00:12:30.839 31695.593 - 31933.905: 98.9831% ( 5) 00:12:30.839 31933.905 - 32172.218: 99.0327% ( 6) 00:12:30.839 32172.218 - 32410.531: 99.0741% ( 5) 00:12:30.839 32410.531 - 32648.844: 99.1237% ( 6) 00:12:30.839 32648.844 - 32887.156: 99.1650% ( 5) 00:12:30.839 32887.156 - 33125.469: 99.2146% ( 6) 00:12:30.839 33125.469 - 33363.782: 99.2725% ( 7) 00:12:30.839 33363.782 - 33602.095: 99.3221% ( 6) 00:12:30.839 33602.095 - 33840.407: 99.3552% ( 4) 00:12:30.839 33840.407 - 34078.720: 99.4048% ( 6) 00:12:30.839 34078.720 - 34317.033: 99.4544% ( 6) 00:12:30.839 34317.033 - 34555.345: 99.4709% ( 2) 00:12:30.839 41466.415 - 41704.727: 99.4957% ( 3) 00:12:30.839 41704.727 - 41943.040: 99.5453% ( 6) 00:12:30.839 41943.040 - 42181.353: 99.5784% ( 4) 00:12:30.839 42181.353 - 42419.665: 99.6197% ( 5) 00:12:30.839 42419.665 
- 42657.978: 99.6610% ( 5) 00:12:30.839 42657.978 - 42896.291: 99.7106% ( 6) 00:12:30.839 42896.291 - 43134.604: 99.7520% ( 5) 00:12:30.839 43134.604 - 43372.916: 99.8016% ( 6) 00:12:30.839 43372.916 - 43611.229: 99.8429% ( 5) 00:12:30.839 43611.229 - 43849.542: 99.8925% ( 6) 00:12:30.839 43849.542 - 44087.855: 99.9339% ( 5) 00:12:30.839 44087.855 - 44326.167: 99.9835% ( 6) 00:12:30.839 44326.167 - 44564.480: 100.0000% ( 2) 00:12:30.839 00:12:30.839 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:30.839 ============================================================================== 00:12:30.839 Range in us Cumulative IO count 00:12:30.839 8221.789 - 8281.367: 0.0165% ( 2) 00:12:30.839 8281.367 - 8340.945: 0.0992% ( 10) 00:12:30.839 8340.945 - 8400.524: 0.1736% ( 9) 00:12:30.839 8400.524 - 8460.102: 0.3224% ( 18) 00:12:30.839 8460.102 - 8519.680: 0.5622% ( 29) 00:12:30.839 8519.680 - 8579.258: 0.9094% ( 42) 00:12:30.839 8579.258 - 8638.836: 1.3889% ( 58) 00:12:30.839 8638.836 - 8698.415: 1.8684% ( 58) 00:12:30.839 8698.415 - 8757.993: 2.3644% ( 60) 00:12:30.839 8757.993 - 8817.571: 3.0010% ( 77) 00:12:30.839 8817.571 - 8877.149: 3.6954% ( 84) 00:12:30.839 8877.149 - 8936.727: 4.4643% ( 93) 00:12:30.839 8936.727 - 8996.305: 5.2166% ( 91) 00:12:30.839 8996.305 - 9055.884: 5.9441% ( 88) 00:12:30.839 9055.884 - 9115.462: 6.7047% ( 92) 00:12:30.839 9115.462 - 9175.040: 7.4487% ( 90) 00:12:30.839 9175.040 - 9234.618: 8.3251% ( 106) 00:12:30.839 9234.618 - 9294.196: 9.3915% ( 129) 00:12:30.839 9294.196 - 9353.775: 10.6151% ( 148) 00:12:30.839 9353.775 - 9413.353: 11.9874% ( 166) 00:12:30.839 9413.353 - 9472.931: 13.6326% ( 199) 00:12:30.839 9472.931 - 9532.509: 15.4679% ( 222) 00:12:30.839 9532.509 - 9592.087: 17.8323% ( 286) 00:12:30.839 9592.087 - 9651.665: 20.4530% ( 317) 00:12:30.839 9651.665 - 9711.244: 23.2722% ( 341) 00:12:30.839 9711.244 - 9770.822: 26.2401% ( 359) 00:12:30.839 9770.822 - 9830.400: 29.2576% ( 365) 00:12:30.839 9830.400 - 9889.978: 32.3826% ( 378) 00:12:30.839 9889.978 - 9949.556: 35.6729% ( 398) 00:12:30.839 9949.556 - 10009.135: 38.9881% ( 401) 00:12:30.839 10009.135 - 10068.713: 42.4272% ( 416) 00:12:30.840 10068.713 - 10128.291: 46.0152% ( 434) 00:12:30.840 10128.291 - 10187.869: 49.7851% ( 456) 00:12:30.840 10187.869 - 10247.447: 53.4144% ( 439) 00:12:30.840 10247.447 - 10307.025: 56.9858% ( 432) 00:12:30.840 10307.025 - 10366.604: 60.6068% ( 438) 00:12:30.840 10366.604 - 10426.182: 63.9054% ( 399) 00:12:30.840 10426.182 - 10485.760: 67.0883% ( 385) 00:12:30.840 10485.760 - 10545.338: 69.7917% ( 327) 00:12:30.840 10545.338 - 10604.916: 72.4041% ( 316) 00:12:30.840 10604.916 - 10664.495: 74.7933% ( 289) 00:12:30.840 10664.495 - 10724.073: 76.8188% ( 245) 00:12:30.840 10724.073 - 10783.651: 78.6872% ( 226) 00:12:30.840 10783.651 - 10843.229: 80.3571% ( 202) 00:12:30.840 10843.229 - 10902.807: 81.9196% ( 189) 00:12:30.840 10902.807 - 10962.385: 83.2259% ( 158) 00:12:30.840 10962.385 - 11021.964: 84.4825% ( 152) 00:12:30.840 11021.964 - 11081.542: 85.7143% ( 149) 00:12:30.840 11081.542 - 11141.120: 86.7973% ( 131) 00:12:30.840 11141.120 - 11200.698: 87.8059% ( 122) 00:12:30.840 11200.698 - 11260.276: 88.8145% ( 122) 00:12:30.840 11260.276 - 11319.855: 89.6577% ( 102) 00:12:30.840 11319.855 - 11379.433: 90.4349% ( 94) 00:12:30.840 11379.433 - 11439.011: 91.1045% ( 81) 00:12:30.840 11439.011 - 11498.589: 91.5923% ( 59) 00:12:30.840 11498.589 - 11558.167: 91.8568% ( 32) 00:12:30.840 11558.167 - 11617.745: 92.0304% ( 21) 00:12:30.840 11617.745 - 11677.324: 
92.1792% ( 18) 00:12:30.840 11677.324 - 11736.902: 92.3611% ( 22) 00:12:30.840 11736.902 - 11796.480: 92.5513% ( 23) 00:12:30.840 11796.480 - 11856.058: 92.7166% ( 20) 00:12:30.840 11856.058 - 11915.636: 92.8571% ( 17) 00:12:30.840 11915.636 - 11975.215: 92.9729% ( 14) 00:12:30.840 11975.215 - 12034.793: 93.0721% ( 12) 00:12:30.840 12034.793 - 12094.371: 93.2126% ( 17) 00:12:30.840 12094.371 - 12153.949: 93.3284% ( 14) 00:12:30.840 12153.949 - 12213.527: 93.4441% ( 14) 00:12:30.840 12213.527 - 12273.105: 93.5764% ( 16) 00:12:30.840 12273.105 - 12332.684: 93.6839% ( 13) 00:12:30.840 12332.684 - 12392.262: 93.7831% ( 12) 00:12:30.840 12392.262 - 12451.840: 93.9071% ( 15) 00:12:30.840 12451.840 - 12511.418: 94.0228% ( 14) 00:12:30.840 12511.418 - 12570.996: 94.2047% ( 22) 00:12:30.840 12570.996 - 12630.575: 94.3618% ( 19) 00:12:30.840 12630.575 - 12690.153: 94.5271% ( 20) 00:12:30.840 12690.153 - 12749.731: 94.7007% ( 21) 00:12:30.840 12749.731 - 12809.309: 94.9157% ( 26) 00:12:30.840 12809.309 - 12868.887: 95.1141% ( 24) 00:12:30.840 12868.887 - 12928.465: 95.3290% ( 26) 00:12:30.840 12928.465 - 12988.044: 95.5192% ( 23) 00:12:30.840 12988.044 - 13047.622: 95.6845% ( 20) 00:12:30.840 13047.622 - 13107.200: 95.8581% ( 21) 00:12:30.840 13107.200 - 13166.778: 96.0152% ( 19) 00:12:30.840 13166.778 - 13226.356: 96.2054% ( 23) 00:12:30.840 13226.356 - 13285.935: 96.4038% ( 24) 00:12:30.840 13285.935 - 13345.513: 96.5856% ( 22) 00:12:30.840 13345.513 - 13405.091: 96.7593% ( 21) 00:12:30.840 13405.091 - 13464.669: 96.9494% ( 23) 00:12:30.840 13464.669 - 13524.247: 97.1396% ( 23) 00:12:30.840 13524.247 - 13583.825: 97.3132% ( 21) 00:12:30.840 13583.825 - 13643.404: 97.4289% ( 14) 00:12:30.840 13643.404 - 13702.982: 97.5116% ( 10) 00:12:30.840 13702.982 - 13762.560: 97.6025% ( 11) 00:12:30.840 13762.560 - 13822.138: 97.6687% ( 8) 00:12:30.840 13822.138 - 13881.716: 97.7431% ( 9) 00:12:30.840 13881.716 - 13941.295: 97.8092% ( 8) 00:12:30.840 13941.295 - 14000.873: 97.9084% ( 12) 00:12:30.840 14000.873 - 14060.451: 97.9993% ( 11) 00:12:30.840 14060.451 - 14120.029: 98.0655% ( 8) 00:12:30.840 14120.029 - 14179.607: 98.1564% ( 11) 00:12:30.840 14179.607 - 14239.185: 98.2226% ( 8) 00:12:30.840 14239.185 - 14298.764: 98.2970% ( 9) 00:12:30.840 14298.764 - 14358.342: 98.3631% ( 8) 00:12:30.840 14358.342 - 14417.920: 98.4375% ( 9) 00:12:30.840 14417.920 - 14477.498: 98.5119% ( 9) 00:12:30.840 14477.498 - 14537.076: 98.5615% ( 6) 00:12:30.840 14537.076 - 14596.655: 98.6194% ( 7) 00:12:30.840 14596.655 - 14656.233: 98.6690% ( 6) 00:12:30.840 14656.233 - 14715.811: 98.7186% ( 6) 00:12:30.840 14715.811 - 14775.389: 98.7599% ( 5) 00:12:30.840 14775.389 - 14834.967: 98.7847% ( 3) 00:12:30.840 14834.967 - 14894.545: 98.8095% ( 3) 00:12:30.840 14894.545 - 14954.124: 98.8343% ( 3) 00:12:30.840 14954.124 - 15013.702: 98.8674% ( 4) 00:12:30.840 15013.702 - 15073.280: 98.8922% ( 3) 00:12:30.840 15073.280 - 15132.858: 98.9170% ( 3) 00:12:30.840 15132.858 - 15192.436: 98.9418% ( 3) 00:12:30.840 29550.778 - 29669.935: 98.9501% ( 1) 00:12:30.840 29669.935 - 29789.091: 98.9666% ( 2) 00:12:30.840 29789.091 - 29908.247: 98.9914% ( 3) 00:12:30.840 29908.247 - 30027.404: 99.0162% ( 3) 00:12:30.840 30027.404 - 30146.560: 99.0410% ( 3) 00:12:30.840 30146.560 - 30265.716: 99.0658% ( 3) 00:12:30.840 30265.716 - 30384.873: 99.0906% ( 3) 00:12:30.840 30384.873 - 30504.029: 99.1071% ( 2) 00:12:30.840 30504.029 - 30742.342: 99.1650% ( 7) 00:12:30.840 30742.342 - 30980.655: 99.2146% ( 6) 00:12:30.840 30980.655 - 31218.967: 99.2642% ( 6) 
00:12:30.840 31218.967 - 31457.280: 99.3138% ( 6) 00:12:30.840 31457.280 - 31695.593: 99.3634% ( 6) 00:12:30.840 31695.593 - 31933.905: 99.4213% ( 7) 00:12:30.840 31933.905 - 32172.218: 99.4709% ( 6) 00:12:30.840 38844.975 - 39083.287: 99.5205% ( 6) 00:12:30.840 39083.287 - 39321.600: 99.5618% ( 5) 00:12:30.840 39321.600 - 39559.913: 99.6032% ( 5) 00:12:30.840 39559.913 - 39798.225: 99.6528% ( 6) 00:12:30.840 39798.225 - 40036.538: 99.6941% ( 5) 00:12:30.840 40036.538 - 40274.851: 99.7437% ( 6) 00:12:30.840 40274.851 - 40513.164: 99.8016% ( 7) 00:12:30.840 40513.164 - 40751.476: 99.8429% ( 5) 00:12:30.840 40751.476 - 40989.789: 99.8925% ( 6) 00:12:30.840 40989.789 - 41228.102: 99.9421% ( 6) 00:12:30.840 41228.102 - 41466.415: 99.9917% ( 6) 00:12:30.840 41466.415 - 41704.727: 100.0000% ( 1) 00:12:30.840 00:12:30.840 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:30.840 ============================================================================== 00:12:30.840 Range in us Cumulative IO count 00:12:30.840 8281.367 - 8340.945: 0.0579% ( 7) 00:12:30.840 8340.945 - 8400.524: 0.1571% ( 12) 00:12:30.840 8400.524 - 8460.102: 0.2976% ( 17) 00:12:30.840 8460.102 - 8519.680: 0.5043% ( 25) 00:12:30.840 8519.680 - 8579.258: 0.8681% ( 44) 00:12:30.840 8579.258 - 8638.836: 1.3145% ( 54) 00:12:30.840 8638.836 - 8698.415: 1.8684% ( 67) 00:12:30.840 8698.415 - 8757.993: 2.4719% ( 73) 00:12:30.840 8757.993 - 8817.571: 3.1167% ( 78) 00:12:30.840 8817.571 - 8877.149: 3.8194% ( 85) 00:12:30.840 8877.149 - 8936.727: 4.5552% ( 89) 00:12:30.840 8936.727 - 8996.305: 5.2414% ( 83) 00:12:30.840 8996.305 - 9055.884: 5.9441% ( 85) 00:12:30.840 9055.884 - 9115.462: 6.6716% ( 88) 00:12:30.840 9115.462 - 9175.040: 7.4157% ( 90) 00:12:30.840 9175.040 - 9234.618: 8.2176% ( 97) 00:12:30.840 9234.618 - 9294.196: 9.2179% ( 121) 00:12:30.840 9294.196 - 9353.775: 10.5159% ( 157) 00:12:30.840 9353.775 - 9413.353: 11.8882% ( 166) 00:12:30.840 9413.353 - 9472.931: 13.5086% ( 196) 00:12:30.840 9472.931 - 9532.509: 15.4431% ( 234) 00:12:30.840 9532.509 - 9592.087: 17.7331% ( 277) 00:12:30.840 9592.087 - 9651.665: 20.2794% ( 308) 00:12:30.840 9651.665 - 9711.244: 22.9332% ( 321) 00:12:30.840 9711.244 - 9770.822: 25.9177% ( 361) 00:12:30.840 9770.822 - 9830.400: 29.2907% ( 408) 00:12:30.840 9830.400 - 9889.978: 32.4487% ( 382) 00:12:30.840 9889.978 - 9949.556: 35.7804% ( 403) 00:12:30.840 9949.556 - 10009.135: 39.0873% ( 400) 00:12:30.840 10009.135 - 10068.713: 42.5017% ( 413) 00:12:30.840 10068.713 - 10128.291: 46.2219% ( 450) 00:12:30.840 10128.291 - 10187.869: 49.9587% ( 452) 00:12:30.840 10187.869 - 10247.447: 53.6624% ( 448) 00:12:30.840 10247.447 - 10307.025: 57.3909% ( 451) 00:12:30.840 10307.025 - 10366.604: 60.9044% ( 425) 00:12:30.840 10366.604 - 10426.182: 64.2526% ( 405) 00:12:30.840 10426.182 - 10485.760: 67.4438% ( 386) 00:12:30.840 10485.760 - 10545.338: 70.2216% ( 336) 00:12:30.840 10545.338 - 10604.916: 72.7596% ( 307) 00:12:30.840 10604.916 - 10664.495: 75.1819% ( 293) 00:12:30.840 10664.495 - 10724.073: 77.3562% ( 263) 00:12:30.840 10724.073 - 10783.651: 79.1419% ( 216) 00:12:30.840 10783.651 - 10843.229: 80.7705% ( 197) 00:12:30.840 10843.229 - 10902.807: 82.2503% ( 179) 00:12:30.840 10902.807 - 10962.385: 83.6392% ( 168) 00:12:30.840 10962.385 - 11021.964: 84.9124% ( 154) 00:12:30.840 11021.964 - 11081.542: 86.0532% ( 138) 00:12:30.840 11081.542 - 11141.120: 87.1114% ( 128) 00:12:30.840 11141.120 - 11200.698: 88.1118% ( 121) 00:12:30.840 11200.698 - 11260.276: 89.0790% ( 117) 00:12:30.840 
11260.276 - 11319.855: 89.9306% ( 103) 00:12:30.840 11319.855 - 11379.433: 90.7821% ( 103) 00:12:30.840 11379.433 - 11439.011: 91.5013% ( 87) 00:12:30.840 11439.011 - 11498.589: 91.9312% ( 52) 00:12:30.840 11498.589 - 11558.167: 92.1875% ( 31) 00:12:30.840 11558.167 - 11617.745: 92.4024% ( 26) 00:12:30.840 11617.745 - 11677.324: 92.5430% ( 17) 00:12:30.840 11677.324 - 11736.902: 92.6918% ( 18) 00:12:30.840 11736.902 - 11796.480: 92.8075% ( 14) 00:12:30.840 11796.480 - 11856.058: 92.9398% ( 16) 00:12:30.840 11856.058 - 11915.636: 93.0473% ( 13) 00:12:30.840 11915.636 - 11975.215: 93.1630% ( 14) 00:12:30.840 11975.215 - 12034.793: 93.2870% ( 15) 00:12:30.840 12034.793 - 12094.371: 93.4441% ( 19) 00:12:30.840 12094.371 - 12153.949: 93.5764% ( 16) 00:12:30.840 12153.949 - 12213.527: 93.6921% ( 14) 00:12:30.840 12213.527 - 12273.105: 93.8161% ( 15) 00:12:30.840 12273.105 - 12332.684: 93.9567% ( 17) 00:12:30.841 12332.684 - 12392.262: 94.0807% ( 15) 00:12:30.841 12392.262 - 12451.840: 94.2212% ( 17) 00:12:30.841 12451.840 - 12511.418: 94.3452% ( 15) 00:12:30.841 12511.418 - 12570.996: 94.4692% ( 15) 00:12:30.841 12570.996 - 12630.575: 94.5933% ( 15) 00:12:30.841 12630.575 - 12690.153: 94.7173% ( 15) 00:12:30.841 12690.153 - 12749.731: 94.8578% ( 17) 00:12:30.841 12749.731 - 12809.309: 94.9818% ( 15) 00:12:30.841 12809.309 - 12868.887: 95.1306% ( 18) 00:12:30.841 12868.887 - 12928.465: 95.2794% ( 18) 00:12:30.841 12928.465 - 12988.044: 95.4365% ( 19) 00:12:30.841 12988.044 - 13047.622: 95.5522% ( 14) 00:12:30.841 13047.622 - 13107.200: 95.7176% ( 20) 00:12:30.841 13107.200 - 13166.778: 95.8995% ( 22) 00:12:30.841 13166.778 - 13226.356: 96.0813% ( 22) 00:12:30.841 13226.356 - 13285.935: 96.2550% ( 21) 00:12:30.841 13285.935 - 13345.513: 96.4038% ( 18) 00:12:30.841 13345.513 - 13405.091: 96.5856% ( 22) 00:12:30.841 13405.091 - 13464.669: 96.7179% ( 16) 00:12:30.841 13464.669 - 13524.247: 96.8419% ( 15) 00:12:30.841 13524.247 - 13583.825: 96.9742% ( 16) 00:12:30.841 13583.825 - 13643.404: 97.0982% ( 15) 00:12:30.841 13643.404 - 13702.982: 97.2305% ( 16) 00:12:30.841 13702.982 - 13762.560: 97.3793% ( 18) 00:12:30.841 13762.560 - 13822.138: 97.5364% ( 19) 00:12:30.841 13822.138 - 13881.716: 97.6852% ( 18) 00:12:30.841 13881.716 - 13941.295: 97.8588% ( 21) 00:12:30.841 13941.295 - 14000.873: 98.0076% ( 18) 00:12:30.841 14000.873 - 14060.451: 98.1564% ( 18) 00:12:30.841 14060.451 - 14120.029: 98.2722% ( 14) 00:12:30.841 14120.029 - 14179.607: 98.4044% ( 16) 00:12:30.841 14179.607 - 14239.185: 98.4954% ( 11) 00:12:30.841 14239.185 - 14298.764: 98.5863% ( 11) 00:12:30.841 14298.764 - 14358.342: 98.6607% ( 9) 00:12:30.841 14358.342 - 14417.920: 98.6855% ( 3) 00:12:30.841 14417.920 - 14477.498: 98.7103% ( 3) 00:12:30.841 14477.498 - 14537.076: 98.7434% ( 4) 00:12:30.841 14537.076 - 14596.655: 98.7682% ( 3) 00:12:30.841 14596.655 - 14656.233: 98.7930% ( 3) 00:12:30.841 14656.233 - 14715.811: 98.8178% ( 3) 00:12:30.841 14715.811 - 14775.389: 98.8426% ( 3) 00:12:30.841 14775.389 - 14834.967: 98.8757% ( 4) 00:12:30.841 14834.967 - 14894.545: 98.8839% ( 1) 00:12:30.841 14894.545 - 14954.124: 98.9087% ( 3) 00:12:30.841 14954.124 - 15013.702: 98.9335% ( 3) 00:12:30.841 15013.702 - 15073.280: 98.9418% ( 1) 00:12:30.841 27405.964 - 27525.120: 98.9501% ( 1) 00:12:30.841 27525.120 - 27644.276: 98.9583% ( 1) 00:12:30.841 27644.276 - 27763.433: 98.9914% ( 4) 00:12:30.841 27763.433 - 27882.589: 99.0162% ( 3) 00:12:30.841 27882.589 - 28001.745: 99.0410% ( 3) 00:12:30.841 28001.745 - 28120.902: 99.0658% ( 3) 00:12:30.841 
28120.902 - 28240.058: 99.0906% ( 3) 00:12:30.841 28240.058 - 28359.215: 99.1237% ( 4) 00:12:30.841 28359.215 - 28478.371: 99.1485% ( 3) 00:12:30.841 28478.371 - 28597.527: 99.1733% ( 3) 00:12:30.841 28597.527 - 28716.684: 99.1981% ( 3) 00:12:30.841 28716.684 - 28835.840: 99.2229% ( 3) 00:12:30.841 28835.840 - 28954.996: 99.2477% ( 3) 00:12:30.841 28954.996 - 29074.153: 99.2725% ( 3) 00:12:30.841 29074.153 - 29193.309: 99.2973% ( 3) 00:12:30.841 29193.309 - 29312.465: 99.3221% ( 3) 00:12:30.841 29312.465 - 29431.622: 99.3386% ( 2) 00:12:30.841 29431.622 - 29550.778: 99.3634% ( 3) 00:12:30.841 29550.778 - 29669.935: 99.3882% ( 3) 00:12:30.841 29669.935 - 29789.091: 99.4130% ( 3) 00:12:30.841 29789.091 - 29908.247: 99.4378% ( 3) 00:12:30.841 29908.247 - 30027.404: 99.4626% ( 3) 00:12:30.841 30027.404 - 30146.560: 99.4709% ( 1) 00:12:30.841 36461.847 - 36700.160: 99.4792% ( 1) 00:12:30.841 36700.160 - 36938.473: 99.5122% ( 4) 00:12:30.841 36938.473 - 37176.785: 99.5618% ( 6) 00:12:30.841 37176.785 - 37415.098: 99.6032% ( 5) 00:12:30.841 37415.098 - 37653.411: 99.6445% ( 5) 00:12:30.841 37653.411 - 37891.724: 99.6858% ( 5) 00:12:30.841 37891.724 - 38130.036: 99.7354% ( 6) 00:12:30.841 38130.036 - 38368.349: 99.7768% ( 5) 00:12:30.841 38368.349 - 38606.662: 99.8264% ( 6) 00:12:30.841 38606.662 - 38844.975: 99.8677% ( 5) 00:12:30.841 38844.975 - 39083.287: 99.9173% ( 6) 00:12:30.841 39083.287 - 39321.600: 99.9669% ( 6) 00:12:30.841 39321.600 - 39559.913: 100.0000% ( 4) 00:12:30.841 00:12:30.841 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:30.841 ============================================================================== 00:12:30.841 Range in us Cumulative IO count 00:12:30.841 8281.367 - 8340.945: 0.0496% ( 6) 00:12:30.841 8340.945 - 8400.524: 0.1571% ( 13) 00:12:30.841 8400.524 - 8460.102: 0.3142% ( 19) 00:12:30.841 8460.102 - 8519.680: 0.5622% ( 30) 00:12:30.841 8519.680 - 8579.258: 0.8681% ( 37) 00:12:30.841 8579.258 - 8638.836: 1.4220% ( 67) 00:12:30.841 8638.836 - 8698.415: 1.9676% ( 66) 00:12:30.841 8698.415 - 8757.993: 2.5794% ( 74) 00:12:30.841 8757.993 - 8817.571: 3.2325% ( 79) 00:12:30.841 8817.571 - 8877.149: 3.9269% ( 84) 00:12:30.841 8877.149 - 8936.727: 4.6379% ( 86) 00:12:30.841 8936.727 - 8996.305: 5.3406% ( 85) 00:12:30.841 8996.305 - 9055.884: 6.0764% ( 89) 00:12:30.841 9055.884 - 9115.462: 6.8039% ( 88) 00:12:30.841 9115.462 - 9175.040: 7.5314% ( 88) 00:12:30.841 9175.040 - 9234.618: 8.4491% ( 111) 00:12:30.841 9234.618 - 9294.196: 9.4411% ( 120) 00:12:30.841 9294.196 - 9353.775: 10.6399% ( 145) 00:12:30.841 9353.775 - 9413.353: 12.0453% ( 170) 00:12:30.841 9413.353 - 9472.931: 13.6574% ( 195) 00:12:30.841 9472.931 - 9532.509: 15.6250% ( 238) 00:12:30.841 9532.509 - 9592.087: 17.8241% ( 266) 00:12:30.841 9592.087 - 9651.665: 20.4448% ( 317) 00:12:30.841 9651.665 - 9711.244: 23.1812% ( 331) 00:12:30.841 9711.244 - 9770.822: 26.1491% ( 359) 00:12:30.841 9770.822 - 9830.400: 29.2080% ( 370) 00:12:30.841 9830.400 - 9889.978: 32.4901% ( 397) 00:12:30.841 9889.978 - 9949.556: 35.7722% ( 397) 00:12:30.841 9949.556 - 10009.135: 39.1038% ( 403) 00:12:30.841 10009.135 - 10068.713: 42.5678% ( 419) 00:12:30.841 10068.713 - 10128.291: 46.1144% ( 429) 00:12:30.841 10128.291 - 10187.869: 49.8099% ( 447) 00:12:30.841 10187.869 - 10247.447: 53.5549% ( 453) 00:12:30.841 10247.447 - 10307.025: 57.2338% ( 445) 00:12:30.841 10307.025 - 10366.604: 60.8052% ( 432) 00:12:30.841 10366.604 - 10426.182: 64.2444% ( 416) 00:12:30.841 10426.182 - 10485.760: 67.2536% ( 364) 
00:12:30.841 10485.760 - 10545.338: 69.9818% ( 330) 00:12:30.841 10545.338 - 10604.916: 72.5033% ( 305) 00:12:30.841 10604.916 - 10664.495: 74.8429% ( 283) 00:12:30.841 10664.495 - 10724.073: 76.9428% ( 254) 00:12:30.841 10724.073 - 10783.651: 78.8029% ( 225) 00:12:30.841 10783.651 - 10843.229: 80.4481% ( 199) 00:12:30.841 10843.229 - 10902.807: 81.9279% ( 179) 00:12:30.841 10902.807 - 10962.385: 83.3499% ( 172) 00:12:30.841 10962.385 - 11021.964: 84.6892% ( 162) 00:12:30.841 11021.964 - 11081.542: 85.8631% ( 142) 00:12:30.841 11081.542 - 11141.120: 86.9709% ( 134) 00:12:30.841 11141.120 - 11200.698: 88.0291% ( 128) 00:12:30.841 11200.698 - 11260.276: 88.9468% ( 111) 00:12:30.841 11260.276 - 11319.855: 89.8231% ( 106) 00:12:30.841 11319.855 - 11379.433: 90.6415% ( 99) 00:12:30.841 11379.433 - 11439.011: 91.3029% ( 80) 00:12:30.841 11439.011 - 11498.589: 91.7989% ( 60) 00:12:30.841 11498.589 - 11558.167: 92.0883% ( 35) 00:12:30.841 11558.167 - 11617.745: 92.2288% ( 17) 00:12:30.841 11617.745 - 11677.324: 92.3776% ( 18) 00:12:30.841 11677.324 - 11736.902: 92.5182% ( 17) 00:12:30.841 11736.902 - 11796.480: 92.6505% ( 16) 00:12:30.841 11796.480 - 11856.058: 92.8241% ( 21) 00:12:30.841 11856.058 - 11915.636: 93.0060% ( 22) 00:12:30.841 11915.636 - 11975.215: 93.1878% ( 22) 00:12:30.841 11975.215 - 12034.793: 93.3697% ( 22) 00:12:30.841 12034.793 - 12094.371: 93.5681% ( 24) 00:12:30.841 12094.371 - 12153.949: 93.7335% ( 20) 00:12:30.841 12153.949 - 12213.527: 93.9153% ( 22) 00:12:30.841 12213.527 - 12273.105: 94.1055% ( 23) 00:12:30.841 12273.105 - 12332.684: 94.2708% ( 20) 00:12:30.841 12332.684 - 12392.262: 94.4196% ( 18) 00:12:30.841 12392.262 - 12451.840: 94.5602% ( 17) 00:12:30.841 12451.840 - 12511.418: 94.7007% ( 17) 00:12:30.841 12511.418 - 12570.996: 94.8495% ( 18) 00:12:30.841 12570.996 - 12630.575: 95.0066% ( 19) 00:12:30.841 12630.575 - 12690.153: 95.1885% ( 22) 00:12:30.841 12690.153 - 12749.731: 95.3456% ( 19) 00:12:30.841 12749.731 - 12809.309: 95.4944% ( 18) 00:12:30.841 12809.309 - 12868.887: 95.6184% ( 15) 00:12:30.841 12868.887 - 12928.465: 95.7176% ( 12) 00:12:30.841 12928.465 - 12988.044: 95.8251% ( 13) 00:12:30.841 12988.044 - 13047.622: 95.9491% ( 15) 00:12:30.841 13047.622 - 13107.200: 96.0483% ( 12) 00:12:30.841 13107.200 - 13166.778: 96.1723% ( 15) 00:12:30.841 13166.778 - 13226.356: 96.2963% ( 15) 00:12:30.841 13226.356 - 13285.935: 96.4368% ( 17) 00:12:30.841 13285.935 - 13345.513: 96.5774% ( 17) 00:12:30.841 13345.513 - 13405.091: 96.7262% ( 18) 00:12:30.841 13405.091 - 13464.669: 96.8254% ( 12) 00:12:30.841 13464.669 - 13524.247: 96.8915% ( 8) 00:12:30.841 13524.247 - 13583.825: 97.0073% ( 14) 00:12:30.841 13583.825 - 13643.404: 97.0982% ( 11) 00:12:30.841 13643.404 - 13702.982: 97.1974% ( 12) 00:12:30.841 13702.982 - 13762.560: 97.3297% ( 16) 00:12:30.841 13762.560 - 13822.138: 97.4868% ( 19) 00:12:30.841 13822.138 - 13881.716: 97.6521% ( 20) 00:12:30.841 13881.716 - 13941.295: 97.7844% ( 16) 00:12:30.841 13941.295 - 14000.873: 97.8753% ( 11) 00:12:30.842 14000.873 - 14060.451: 98.0076% ( 16) 00:12:30.842 14060.451 - 14120.029: 98.1399% ( 16) 00:12:30.842 14120.029 - 14179.607: 98.2474% ( 13) 00:12:30.842 14179.607 - 14239.185: 98.3466% ( 12) 00:12:30.842 14239.185 - 14298.764: 98.3879% ( 5) 00:12:30.842 14298.764 - 14358.342: 98.4375% ( 6) 00:12:30.842 14358.342 - 14417.920: 98.4954% ( 7) 00:12:30.842 14417.920 - 14477.498: 98.5532% ( 7) 00:12:30.842 14477.498 - 14537.076: 98.6028% ( 6) 00:12:30.842 14537.076 - 14596.655: 98.6524% ( 6) 00:12:30.842 14596.655 - 
14656.233: 98.7103% ( 7) 00:12:30.842 14656.233 - 14715.811: 98.7599% ( 6) 00:12:30.842 14715.811 - 14775.389: 98.8095% ( 6) 00:12:30.842 14775.389 - 14834.967: 98.8591% ( 6) 00:12:30.842 14834.967 - 14894.545: 98.9087% ( 6) 00:12:30.842 14894.545 - 14954.124: 98.9335% ( 3) 00:12:30.842 14954.124 - 15013.702: 98.9418% ( 1) 00:12:30.842 24665.367 - 24784.524: 98.9501% ( 1) 00:12:30.842 24784.524 - 24903.680: 98.9666% ( 2) 00:12:30.842 24903.680 - 25022.836: 98.9831% ( 2) 00:12:30.842 25022.836 - 25141.993: 99.0079% ( 3) 00:12:30.842 25141.993 - 25261.149: 99.0245% ( 2) 00:12:30.842 25261.149 - 25380.305: 99.0493% ( 3) 00:12:30.842 25380.305 - 25499.462: 99.0741% ( 3) 00:12:30.842 25499.462 - 25618.618: 99.0989% ( 3) 00:12:30.842 25618.618 - 25737.775: 99.1154% ( 2) 00:12:30.842 25737.775 - 25856.931: 99.1402% ( 3) 00:12:30.842 25856.931 - 25976.087: 99.1567% ( 2) 00:12:30.842 25976.087 - 26095.244: 99.1733% ( 2) 00:12:30.842 26095.244 - 26214.400: 99.1981% ( 3) 00:12:30.842 26214.400 - 26333.556: 99.2146% ( 2) 00:12:30.842 26333.556 - 26452.713: 99.2394% ( 3) 00:12:30.842 26452.713 - 26571.869: 99.2642% ( 3) 00:12:30.842 26571.869 - 26691.025: 99.2808% ( 2) 00:12:30.842 26691.025 - 26810.182: 99.3056% ( 3) 00:12:30.842 26810.182 - 26929.338: 99.3221% ( 2) 00:12:30.842 26929.338 - 27048.495: 99.3469% ( 3) 00:12:30.842 27048.495 - 27167.651: 99.3717% ( 3) 00:12:30.842 27167.651 - 27286.807: 99.3882% ( 2) 00:12:30.842 27286.807 - 27405.964: 99.4130% ( 3) 00:12:30.842 27405.964 - 27525.120: 99.4378% ( 3) 00:12:30.842 27525.120 - 27644.276: 99.4544% ( 2) 00:12:30.842 27644.276 - 27763.433: 99.4709% ( 2) 00:12:30.842 33840.407 - 34078.720: 99.5205% ( 6) 00:12:30.842 34078.720 - 34317.033: 99.5618% ( 5) 00:12:30.842 34317.033 - 34555.345: 99.6114% ( 6) 00:12:30.842 34555.345 - 34793.658: 99.6528% ( 5) 00:12:30.842 34793.658 - 35031.971: 99.6941% ( 5) 00:12:30.842 35031.971 - 35270.284: 99.7437% ( 6) 00:12:30.842 35270.284 - 35508.596: 99.7851% ( 5) 00:12:30.842 35508.596 - 35746.909: 99.8347% ( 6) 00:12:30.842 35746.909 - 35985.222: 99.8843% ( 6) 00:12:30.842 35985.222 - 36223.535: 99.9256% ( 5) 00:12:30.842 36223.535 - 36461.847: 99.9752% ( 6) 00:12:30.842 36461.847 - 36700.160: 100.0000% ( 3) 00:12:30.842 00:12:30.842 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:30.842 ============================================================================== 00:12:30.842 Range in us Cumulative IO count 00:12:30.842 8281.367 - 8340.945: 0.0165% ( 2) 00:12:30.842 8340.945 - 8400.524: 0.1736% ( 19) 00:12:30.842 8400.524 - 8460.102: 0.3307% ( 19) 00:12:30.842 8460.102 - 8519.680: 0.5539% ( 27) 00:12:30.842 8519.680 - 8579.258: 0.9755% ( 51) 00:12:30.842 8579.258 - 8638.836: 1.4716% ( 60) 00:12:30.842 8638.836 - 8698.415: 1.9759% ( 61) 00:12:30.842 8698.415 - 8757.993: 2.5959% ( 75) 00:12:30.842 8757.993 - 8817.571: 3.2407% ( 78) 00:12:30.842 8817.571 - 8877.149: 3.9600% ( 87) 00:12:30.842 8877.149 - 8936.727: 4.6296% ( 81) 00:12:30.842 8936.727 - 8996.305: 5.3323% ( 85) 00:12:30.842 8996.305 - 9055.884: 6.0681% ( 89) 00:12:30.842 9055.884 - 9115.462: 6.7460% ( 82) 00:12:30.842 9115.462 - 9175.040: 7.5479% ( 97) 00:12:30.842 9175.040 - 9234.618: 8.4739% ( 112) 00:12:30.842 9234.618 - 9294.196: 9.4163% ( 114) 00:12:30.842 9294.196 - 9353.775: 10.5407% ( 136) 00:12:30.842 9353.775 - 9413.353: 11.9626% ( 172) 00:12:30.842 9413.353 - 9472.931: 13.4921% ( 185) 00:12:30.842 9472.931 - 9532.509: 15.3770% ( 228) 00:12:30.842 9532.509 - 9592.087: 17.6009% ( 269) 00:12:30.842 9592.087 - 9651.665: 
20.1306% ( 306) 00:12:30.842 9651.665 - 9711.244: 22.9249% ( 338) 00:12:30.842 9711.244 - 9770.822: 25.9425% ( 365) 00:12:30.842 9770.822 - 9830.400: 29.1171% ( 384) 00:12:30.842 9830.400 - 9889.978: 32.3330% ( 389) 00:12:30.842 9889.978 - 9949.556: 35.6233% ( 398) 00:12:30.842 9949.556 - 10009.135: 39.0129% ( 410) 00:12:30.842 10009.135 - 10068.713: 42.6091% ( 435) 00:12:30.842 10068.713 - 10128.291: 46.3211% ( 449) 00:12:30.842 10128.291 - 10187.869: 49.9917% ( 444) 00:12:30.842 10187.869 - 10247.447: 53.8277% ( 464) 00:12:30.842 10247.447 - 10307.025: 57.4901% ( 443) 00:12:30.842 10307.025 - 10366.604: 61.0450% ( 430) 00:12:30.842 10366.604 - 10426.182: 64.3849% ( 404) 00:12:30.842 10426.182 - 10485.760: 67.3859% ( 363) 00:12:30.842 10485.760 - 10545.338: 70.1389% ( 333) 00:12:30.842 10545.338 - 10604.916: 72.7183% ( 312) 00:12:30.842 10604.916 - 10664.495: 75.0744% ( 285) 00:12:30.842 10664.495 - 10724.073: 77.1247% ( 248) 00:12:30.842 10724.073 - 10783.651: 78.7864% ( 201) 00:12:30.842 10783.651 - 10843.229: 80.3323% ( 187) 00:12:30.842 10843.229 - 10902.807: 81.7130% ( 167) 00:12:30.842 10902.807 - 10962.385: 83.0192% ( 158) 00:12:30.842 10962.385 - 11021.964: 84.2262% ( 146) 00:12:30.842 11021.964 - 11081.542: 85.3753% ( 139) 00:12:30.842 11081.542 - 11141.120: 86.5327% ( 140) 00:12:30.842 11141.120 - 11200.698: 87.5579% ( 124) 00:12:30.842 11200.698 - 11260.276: 88.5003% ( 114) 00:12:30.842 11260.276 - 11319.855: 89.4097% ( 110) 00:12:30.842 11319.855 - 11379.433: 90.2034% ( 96) 00:12:30.842 11379.433 - 11439.011: 90.8399% ( 77) 00:12:30.842 11439.011 - 11498.589: 91.2781% ( 53) 00:12:30.842 11498.589 - 11558.167: 91.6336% ( 43) 00:12:30.842 11558.167 - 11617.745: 91.9229% ( 35) 00:12:30.842 11617.745 - 11677.324: 92.1462% ( 27) 00:12:30.842 11677.324 - 11736.902: 92.3528% ( 25) 00:12:30.842 11736.902 - 11796.480: 92.5678% ( 26) 00:12:30.842 11796.480 - 11856.058: 92.7745% ( 25) 00:12:30.842 11856.058 - 11915.636: 92.9894% ( 26) 00:12:30.842 11915.636 - 11975.215: 93.2044% ( 26) 00:12:30.842 11975.215 - 12034.793: 93.4110% ( 25) 00:12:30.842 12034.793 - 12094.371: 93.6508% ( 29) 00:12:30.842 12094.371 - 12153.949: 93.8740% ( 27) 00:12:30.842 12153.949 - 12213.527: 94.0559% ( 22) 00:12:30.842 12213.527 - 12273.105: 94.2543% ( 24) 00:12:30.842 12273.105 - 12332.684: 94.4362% ( 22) 00:12:30.842 12332.684 - 12392.262: 94.6263% ( 23) 00:12:30.842 12392.262 - 12451.840: 94.8495% ( 27) 00:12:30.842 12451.840 - 12511.418: 95.0397% ( 23) 00:12:30.842 12511.418 - 12570.996: 95.2050% ( 20) 00:12:30.842 12570.996 - 12630.575: 95.3704% ( 20) 00:12:30.842 12630.575 - 12690.153: 95.5357% ( 20) 00:12:30.842 12690.153 - 12749.731: 95.6928% ( 19) 00:12:30.842 12749.731 - 12809.309: 95.8085% ( 14) 00:12:30.842 12809.309 - 12868.887: 95.9160% ( 13) 00:12:30.842 12868.887 - 12928.465: 96.0317% ( 14) 00:12:30.842 12928.465 - 12988.044: 96.1558% ( 15) 00:12:30.842 12988.044 - 13047.622: 96.2963% ( 17) 00:12:30.842 13047.622 - 13107.200: 96.4368% ( 17) 00:12:30.842 13107.200 - 13166.778: 96.5691% ( 16) 00:12:30.842 13166.778 - 13226.356: 96.7097% ( 17) 00:12:30.842 13226.356 - 13285.935: 96.8254% ( 14) 00:12:30.842 13285.935 - 13345.513: 96.9081% ( 10) 00:12:30.842 13345.513 - 13405.091: 96.9825% ( 9) 00:12:30.842 13405.091 - 13464.669: 97.0569% ( 9) 00:12:30.842 13464.669 - 13524.247: 97.1313% ( 9) 00:12:30.842 13524.247 - 13583.825: 97.1974% ( 8) 00:12:30.842 13583.825 - 13643.404: 97.2718% ( 9) 00:12:30.842 13643.404 - 13702.982: 97.3710% ( 12) 00:12:30.842 13702.982 - 13762.560: 97.4785% ( 13) 
00:12:30.842 13762.560 - 13822.138: 97.5777% ( 12) 00:12:30.842 13822.138 - 13881.716: 97.6687% ( 11) 00:12:30.842 13881.716 - 13941.295: 97.7761% ( 13) 00:12:30.842 13941.295 - 14000.873: 97.8836% ( 13) 00:12:30.842 14000.873 - 14060.451: 97.9993% ( 14) 00:12:30.842 14060.451 - 14120.029: 98.0985% ( 12) 00:12:30.842 14120.029 - 14179.607: 98.2060% ( 13) 00:12:30.842 14179.607 - 14239.185: 98.2722% ( 8) 00:12:30.842 14239.185 - 14298.764: 98.3218% ( 6) 00:12:30.842 14298.764 - 14358.342: 98.3879% ( 8) 00:12:30.842 14358.342 - 14417.920: 98.4375% ( 6) 00:12:30.842 14417.920 - 14477.498: 98.4871% ( 6) 00:12:30.842 14477.498 - 14537.076: 98.5284% ( 5) 00:12:30.842 14537.076 - 14596.655: 98.5863% ( 7) 00:12:30.842 14596.655 - 14656.233: 98.6359% ( 6) 00:12:30.842 14656.233 - 14715.811: 98.6772% ( 5) 00:12:30.842 14715.811 - 14775.389: 98.7351% ( 7) 00:12:30.842 14775.389 - 14834.967: 98.7847% ( 6) 00:12:30.842 14834.967 - 14894.545: 98.8095% ( 3) 00:12:30.842 14894.545 - 14954.124: 98.8343% ( 3) 00:12:30.842 14954.124 - 15013.702: 98.8674% ( 4) 00:12:30.842 15013.702 - 15073.280: 98.8922% ( 3) 00:12:30.842 15073.280 - 15132.858: 98.9170% ( 3) 00:12:30.842 15132.858 - 15192.436: 98.9418% ( 3) 00:12:30.842 21924.771 - 22043.927: 98.9501% ( 1) 00:12:30.842 22043.927 - 22163.084: 98.9666% ( 2) 00:12:30.842 22163.084 - 22282.240: 98.9914% ( 3) 00:12:30.842 22282.240 - 22401.396: 99.0162% ( 3) 00:12:30.842 22401.396 - 22520.553: 99.0410% ( 3) 00:12:30.842 22520.553 - 22639.709: 99.0658% ( 3) 00:12:30.842 22639.709 - 22758.865: 99.0823% ( 2) 00:12:30.842 22758.865 - 22878.022: 99.0989% ( 2) 00:12:30.842 22878.022 - 22997.178: 99.1237% ( 3) 00:12:30.843 22997.178 - 23116.335: 99.1402% ( 2) 00:12:30.843 23116.335 - 23235.491: 99.1567% ( 2) 00:12:30.843 23235.491 - 23354.647: 99.1733% ( 2) 00:12:30.843 23354.647 - 23473.804: 99.1981% ( 3) 00:12:30.843 23473.804 - 23592.960: 99.2229% ( 3) 00:12:30.843 23592.960 - 23712.116: 99.2394% ( 2) 00:12:30.843 23712.116 - 23831.273: 99.2642% ( 3) 00:12:30.843 23831.273 - 23950.429: 99.2808% ( 2) 00:12:30.843 23950.429 - 24069.585: 99.3056% ( 3) 00:12:30.843 24069.585 - 24188.742: 99.3221% ( 2) 00:12:30.843 24188.742 - 24307.898: 99.3469% ( 3) 00:12:30.843 24307.898 - 24427.055: 99.3634% ( 2) 00:12:30.843 24427.055 - 24546.211: 99.3800% ( 2) 00:12:30.843 24546.211 - 24665.367: 99.4048% ( 3) 00:12:30.843 24665.367 - 24784.524: 99.4213% ( 2) 00:12:30.843 24784.524 - 24903.680: 99.4378% ( 2) 00:12:30.843 24903.680 - 25022.836: 99.4626% ( 3) 00:12:30.843 25022.836 - 25141.993: 99.4709% ( 1) 00:12:30.843 30742.342 - 30980.655: 99.4874% ( 2) 00:12:30.843 30980.655 - 31218.967: 99.5288% ( 5) 00:12:30.843 31218.967 - 31457.280: 99.5784% ( 6) 00:12:30.843 31457.280 - 31695.593: 99.6114% ( 4) 00:12:30.843 31695.593 - 31933.905: 99.6528% ( 5) 00:12:30.843 31933.905 - 32172.218: 99.6941% ( 5) 00:12:30.843 32172.218 - 32410.531: 99.7354% ( 5) 00:12:30.843 32410.531 - 32648.844: 99.7851% ( 6) 00:12:30.843 32648.844 - 32887.156: 99.8347% ( 6) 00:12:30.843 32887.156 - 33125.469: 99.8843% ( 6) 00:12:30.843 33125.469 - 33363.782: 99.9339% ( 6) 00:12:30.843 33363.782 - 33602.095: 99.9752% ( 5) 00:12:30.843 33602.095 - 33840.407: 100.0000% ( 3) 00:12:30.843 00:12:30.843 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:30.843 ============================================================================== 00:12:30.843 Range in us Cumulative IO count 00:12:30.843 8281.367 - 8340.945: 0.0579% ( 7) 00:12:30.843 8340.945 - 8400.524: 0.1736% ( 14) 00:12:30.843 8400.524 - 
8460.102: 0.3472% ( 21) 00:12:30.843 8460.102 - 8519.680: 0.5952% ( 30) 00:12:30.843 8519.680 - 8579.258: 1.0251% ( 52) 00:12:30.843 8579.258 - 8638.836: 1.5294% ( 61) 00:12:30.843 8638.836 - 8698.415: 2.0833% ( 67) 00:12:30.843 8698.415 - 8757.993: 2.6786% ( 72) 00:12:30.843 8757.993 - 8817.571: 3.4309% ( 91) 00:12:30.843 8817.571 - 8877.149: 4.1667% ( 89) 00:12:30.843 8877.149 - 8936.727: 4.8859% ( 87) 00:12:30.843 8936.727 - 8996.305: 5.6382% ( 91) 00:12:30.843 8996.305 - 9055.884: 6.3079% ( 81) 00:12:30.843 9055.884 - 9115.462: 7.0602% ( 91) 00:12:30.843 9115.462 - 9175.040: 7.8290% ( 93) 00:12:30.843 9175.040 - 9234.618: 8.7798% ( 115) 00:12:30.843 9234.618 - 9294.196: 9.8132% ( 125) 00:12:30.843 9294.196 - 9353.775: 10.9706% ( 140) 00:12:30.843 9353.775 - 9413.353: 12.3429% ( 166) 00:12:30.843 9413.353 - 9472.931: 13.9220% ( 191) 00:12:30.843 9472.931 - 9532.509: 15.8978% ( 239) 00:12:30.843 9532.509 - 9592.087: 18.1300% ( 270) 00:12:30.843 9592.087 - 9651.665: 20.5605% ( 294) 00:12:30.843 9651.665 - 9711.244: 23.2060% ( 320) 00:12:30.843 9711.244 - 9770.822: 26.0086% ( 339) 00:12:30.843 9770.822 - 9830.400: 28.9517% ( 356) 00:12:30.843 9830.400 - 9889.978: 32.1594% ( 388) 00:12:30.843 9889.978 - 9949.556: 35.3753% ( 389) 00:12:30.843 9949.556 - 10009.135: 38.8145% ( 416) 00:12:30.843 10009.135 - 10068.713: 42.3859% ( 432) 00:12:30.843 10068.713 - 10128.291: 46.0235% ( 440) 00:12:30.843 10128.291 - 10187.869: 49.7685% ( 453) 00:12:30.843 10187.869 - 10247.447: 53.5218% ( 454) 00:12:30.843 10247.447 - 10307.025: 57.0933% ( 432) 00:12:30.843 10307.025 - 10366.604: 60.6233% ( 427) 00:12:30.843 10366.604 - 10426.182: 63.8476% ( 390) 00:12:30.843 10426.182 - 10485.760: 66.9147% ( 371) 00:12:30.843 10485.760 - 10545.338: 69.6759% ( 334) 00:12:30.843 10545.338 - 10604.916: 72.3214% ( 320) 00:12:30.843 10604.916 - 10664.495: 74.8512% ( 306) 00:12:30.843 10664.495 - 10724.073: 76.9841% ( 258) 00:12:30.843 10724.073 - 10783.651: 78.8608% ( 227) 00:12:30.843 10783.651 - 10843.229: 80.3985% ( 186) 00:12:30.843 10843.229 - 10902.807: 81.7378% ( 162) 00:12:30.843 10902.807 - 10962.385: 83.0026% ( 153) 00:12:30.843 10962.385 - 11021.964: 84.2923% ( 156) 00:12:30.843 11021.964 - 11081.542: 85.4745% ( 143) 00:12:30.843 11081.542 - 11141.120: 86.5162% ( 126) 00:12:30.843 11141.120 - 11200.698: 87.5992% ( 131) 00:12:30.843 11200.698 - 11260.276: 88.5665% ( 117) 00:12:30.843 11260.276 - 11319.855: 89.4263% ( 104) 00:12:30.843 11319.855 - 11379.433: 90.1703% ( 90) 00:12:30.843 11379.433 - 11439.011: 90.7573% ( 71) 00:12:30.843 11439.011 - 11498.589: 91.2616% ( 61) 00:12:30.843 11498.589 - 11558.167: 91.5840% ( 39) 00:12:30.843 11558.167 - 11617.745: 91.8568% ( 33) 00:12:30.843 11617.745 - 11677.324: 92.1131% ( 31) 00:12:30.843 11677.324 - 11736.902: 92.3776% ( 32) 00:12:30.843 11736.902 - 11796.480: 92.6174% ( 29) 00:12:30.843 11796.480 - 11856.058: 92.8406% ( 27) 00:12:30.843 11856.058 - 11915.636: 93.0390% ( 24) 00:12:30.843 11915.636 - 11975.215: 93.2374% ( 24) 00:12:30.843 11975.215 - 12034.793: 93.4358% ( 24) 00:12:30.843 12034.793 - 12094.371: 93.6343% ( 24) 00:12:30.843 12094.371 - 12153.949: 93.8575% ( 27) 00:12:30.843 12153.949 - 12213.527: 94.0476% ( 23) 00:12:30.843 12213.527 - 12273.105: 94.2378% ( 23) 00:12:30.843 12273.105 - 12332.684: 94.4114% ( 21) 00:12:30.843 12332.684 - 12392.262: 94.6098% ( 24) 00:12:30.843 12392.262 - 12451.840: 94.7999% ( 23) 00:12:30.843 12451.840 - 12511.418: 94.9735% ( 21) 00:12:30.843 12511.418 - 12570.996: 95.1637% ( 23) 00:12:30.843 12570.996 - 12630.575: 
95.3704% ( 25) 00:12:30.843 12630.575 - 12690.153: 95.5522% ( 22) 00:12:30.843 12690.153 - 12749.731: 95.7341% ( 22) 00:12:30.843 12749.731 - 12809.309: 95.8747% ( 17) 00:12:30.843 12809.309 - 12868.887: 96.0400% ( 20) 00:12:30.843 12868.887 - 12928.465: 96.1558% ( 14) 00:12:30.843 12928.465 - 12988.044: 96.2715% ( 14) 00:12:30.843 12988.044 - 13047.622: 96.4120% ( 17) 00:12:30.843 13047.622 - 13107.200: 96.5112% ( 12) 00:12:30.843 13107.200 - 13166.778: 96.5856% ( 9) 00:12:30.843 13166.778 - 13226.356: 96.6766% ( 11) 00:12:30.843 13226.356 - 13285.935: 96.7841% ( 13) 00:12:30.843 13285.935 - 13345.513: 96.8667% ( 10) 00:12:30.843 13345.513 - 13405.091: 96.9494% ( 10) 00:12:30.843 13405.091 - 13464.669: 97.0321% ( 10) 00:12:30.843 13464.669 - 13524.247: 97.1147% ( 10) 00:12:30.843 13524.247 - 13583.825: 97.2057% ( 11) 00:12:30.843 13583.825 - 13643.404: 97.2966% ( 11) 00:12:30.843 13643.404 - 13702.982: 97.3958% ( 12) 00:12:30.843 13702.982 - 13762.560: 97.4868% ( 11) 00:12:30.843 13762.560 - 13822.138: 97.5860% ( 12) 00:12:30.843 13822.138 - 13881.716: 97.6687% ( 10) 00:12:30.843 13881.716 - 13941.295: 97.7513% ( 10) 00:12:30.843 13941.295 - 14000.873: 97.8340% ( 10) 00:12:30.843 14000.873 - 14060.451: 97.9167% ( 10) 00:12:30.843 14060.451 - 14120.029: 97.9993% ( 10) 00:12:30.843 14120.029 - 14179.607: 98.0985% ( 12) 00:12:30.843 14179.607 - 14239.185: 98.1978% ( 12) 00:12:30.843 14239.185 - 14298.764: 98.2804% ( 10) 00:12:30.843 14298.764 - 14358.342: 98.3548% ( 9) 00:12:30.843 14358.342 - 14417.920: 98.4210% ( 8) 00:12:30.843 14417.920 - 14477.498: 98.4788% ( 7) 00:12:30.843 14477.498 - 14537.076: 98.5367% ( 7) 00:12:30.843 14537.076 - 14596.655: 98.5863% ( 6) 00:12:30.843 14596.655 - 14656.233: 98.6442% ( 7) 00:12:30.843 14656.233 - 14715.811: 98.6772% ( 4) 00:12:30.843 14715.811 - 14775.389: 98.7021% ( 3) 00:12:30.843 14775.389 - 14834.967: 98.7269% ( 3) 00:12:30.843 14834.967 - 14894.545: 98.7517% ( 3) 00:12:30.843 14894.545 - 14954.124: 98.7765% ( 3) 00:12:30.844 14954.124 - 15013.702: 98.8013% ( 3) 00:12:30.844 15013.702 - 15073.280: 98.8261% ( 3) 00:12:30.844 15073.280 - 15132.858: 98.8591% ( 4) 00:12:30.844 15132.858 - 15192.436: 98.8839% ( 3) 00:12:30.844 15192.436 - 15252.015: 98.9087% ( 3) 00:12:30.844 15252.015 - 15371.171: 98.9418% ( 4) 00:12:30.844 19303.331 - 19422.487: 98.9583% ( 2) 00:12:30.844 19422.487 - 19541.644: 98.9831% ( 3) 00:12:30.844 19541.644 - 19660.800: 98.9997% ( 2) 00:12:30.844 19660.800 - 19779.956: 99.0245% ( 3) 00:12:30.844 19779.956 - 19899.113: 99.0410% ( 2) 00:12:30.844 19899.113 - 20018.269: 99.0658% ( 3) 00:12:30.844 20018.269 - 20137.425: 99.0906% ( 3) 00:12:30.844 20137.425 - 20256.582: 99.1071% ( 2) 00:12:30.844 20256.582 - 20375.738: 99.1237% ( 2) 00:12:30.844 20375.738 - 20494.895: 99.1485% ( 3) 00:12:30.844 20494.895 - 20614.051: 99.1733% ( 3) 00:12:30.844 20614.051 - 20733.207: 99.1981% ( 3) 00:12:30.844 20733.207 - 20852.364: 99.2229% ( 3) 00:12:30.844 20852.364 - 20971.520: 99.2477% ( 3) 00:12:30.844 20971.520 - 21090.676: 99.2642% ( 2) 00:12:30.844 21090.676 - 21209.833: 99.2890% ( 3) 00:12:30.844 21209.833 - 21328.989: 99.3138% ( 3) 00:12:30.844 21328.989 - 21448.145: 99.3386% ( 3) 00:12:30.844 21448.145 - 21567.302: 99.3634% ( 3) 00:12:30.844 21567.302 - 21686.458: 99.3800% ( 2) 00:12:30.844 21686.458 - 21805.615: 99.4048% ( 3) 00:12:30.844 21805.615 - 21924.771: 99.4296% ( 3) 00:12:30.844 21924.771 - 22043.927: 99.4544% ( 3) 00:12:30.844 22043.927 - 22163.084: 99.4709% ( 2) 00:12:30.844 28001.745 - 28120.902: 99.4957% ( 3) 00:12:30.844 
28120.902 - 28240.058: 99.5122% ( 2) 00:12:30.844 28240.058 - 28359.215: 99.5370% ( 3) 00:12:30.844 28359.215 - 28478.371: 99.5536% ( 2) 00:12:30.844 28478.371 - 28597.527: 99.5784% ( 3) 00:12:30.844 28597.527 - 28716.684: 99.6032% ( 3) 00:12:30.844 28716.684 - 28835.840: 99.6280% ( 3) 00:12:30.844 28835.840 - 28954.996: 99.6528% ( 3) 00:12:30.844 28954.996 - 29074.153: 99.6693% ( 2) 00:12:30.844 29074.153 - 29193.309: 99.6858% ( 2) 00:12:30.844 29193.309 - 29312.465: 99.7106% ( 3) 00:12:30.844 29312.465 - 29431.622: 99.7354% ( 3) 00:12:30.844 29431.622 - 29550.778: 99.7603% ( 3) 00:12:30.844 29550.778 - 29669.935: 99.7768% ( 2) 00:12:30.844 29669.935 - 29789.091: 99.8016% ( 3) 00:12:30.844 29789.091 - 29908.247: 99.8181% ( 2) 00:12:30.844 29908.247 - 30027.404: 99.8429% ( 3) 00:12:30.844 30027.404 - 30146.560: 99.8677% ( 3) 00:12:30.844 30146.560 - 30265.716: 99.8925% ( 3) 00:12:30.844 30265.716 - 30384.873: 99.9173% ( 3) 00:12:30.844 30384.873 - 30504.029: 99.9421% ( 3) 00:12:30.844 30504.029 - 30742.342: 99.9835% ( 5) 00:12:30.844 30742.342 - 30980.655: 100.0000% ( 2) 00:12:30.844 00:12:30.844 17:03:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:32.220 Initializing NVMe Controllers 00:12:32.220 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:32.220 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:32.220 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:32.220 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:32.220 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:32.220 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:32.220 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:32.220 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:32.220 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:32.220 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:32.220 Initialization complete. Launching workers. 
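For context on the command recorded just above: the write-latency tables and histograms that follow come from a single spdk_nvme_perf invocation. Below is a minimal annotated sketch of that same invocation; the flag descriptions are drawn from the spdk_nvme_perf option help as generally documented, not from this build's own --help output, so treat them as a reading aid and confirm locally.

    # Annotated form of the invocation logged above (descriptions are assumptions
    # based on the usual spdk_nvme_perf help text; verify with --help on your build):
    #   -q 128     number of outstanding I/Os kept in flight (queue depth)
    #   -w write   I/O pattern: sequential writes
    #   -o 12288   I/O size in bytes (12 KiB, i.e. three 4 KiB logical blocks)
    #   -t 1       run time in seconds
    #   -LL        enable latency tracking; giving -L twice also prints the
    #              detailed per-range histograms in addition to the summary table
    #   -i 0       shared memory group ID for SPDK multi-process setups
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0

With this reading, the "Latency(us)" table immediately below is the summary view and the per-device "Latency histogram" sections further down are the detailed output enabled by the doubled -L.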
00:12:32.220 ======================================================== 00:12:32.220 Latency(us) 00:12:32.220 Device Information : IOPS MiB/s Average min max 00:12:32.220 PCIE (0000:00:10.0) NSID 1 from core 0: 7892.43 92.49 16274.59 10653.84 46456.95 00:12:32.220 PCIE (0000:00:11.0) NSID 1 from core 0: 7892.43 92.49 16246.84 11030.91 43998.47 00:12:32.220 PCIE (0000:00:13.0) NSID 1 from core 0: 7892.43 92.49 16219.30 10710.85 42669.75 00:12:32.220 PCIE (0000:00:12.0) NSID 1 from core 0: 7892.43 92.49 16191.67 10704.44 40273.61 00:12:32.220 PCIE (0000:00:12.0) NSID 2 from core 0: 7956.07 93.24 16034.28 10861.31 31854.75 00:12:32.220 PCIE (0000:00:12.0) NSID 3 from core 0: 7956.07 93.24 16006.52 10883.87 29209.40 00:12:32.220 ======================================================== 00:12:32.220 Total : 47481.85 556.43 16161.82 10653.84 46456.95 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11021.964us 00:12:32.220 10.00000% : 11617.745us 00:12:32.220 25.00000% : 12094.371us 00:12:32.220 50.00000% : 13166.778us 00:12:32.220 75.00000% : 22520.553us 00:12:32.220 90.00000% : 24427.055us 00:12:32.220 95.00000% : 25499.462us 00:12:32.220 98.00000% : 26452.713us 00:12:32.220 99.00000% : 38368.349us 00:12:32.220 99.50000% : 45279.418us 00:12:32.220 99.90000% : 46232.669us 00:12:32.220 99.99000% : 46470.982us 00:12:32.220 99.99900% : 46470.982us 00:12:32.220 99.99990% : 46470.982us 00:12:32.220 99.99999% : 46470.982us 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11319.855us 00:12:32.220 10.00000% : 11736.902us 00:12:32.220 25.00000% : 12094.371us 00:12:32.220 50.00000% : 13226.356us 00:12:32.220 75.00000% : 22997.178us 00:12:32.220 90.00000% : 24188.742us 00:12:32.220 95.00000% : 24784.524us 00:12:32.220 98.00000% : 25856.931us 00:12:32.220 99.00000% : 35746.909us 00:12:32.220 99.50000% : 42896.291us 00:12:32.220 99.90000% : 43849.542us 00:12:32.220 99.99000% : 44087.855us 00:12:32.220 99.99900% : 44087.855us 00:12:32.220 99.99990% : 44087.855us 00:12:32.220 99.99999% : 44087.855us 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11141.120us 00:12:32.220 10.00000% : 11677.324us 00:12:32.220 25.00000% : 12094.371us 00:12:32.220 50.00000% : 13226.356us 00:12:32.220 75.00000% : 22997.178us 00:12:32.220 90.00000% : 24188.742us 00:12:32.220 95.00000% : 24903.680us 00:12:32.220 98.00000% : 26095.244us 00:12:32.220 99.00000% : 34317.033us 00:12:32.220 99.50000% : 41466.415us 00:12:32.220 99.90000% : 42657.978us 00:12:32.220 99.99000% : 42896.291us 00:12:32.220 99.99900% : 42896.291us 00:12:32.220 99.99990% : 42896.291us 00:12:32.220 99.99999% : 42896.291us 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11141.120us 00:12:32.220 10.00000% : 11736.902us 00:12:32.220 25.00000% : 12153.949us 00:12:32.220 50.00000% : 13285.935us 00:12:32.220 75.00000% : 22997.178us 00:12:32.220 90.00000% : 24188.742us 00:12:32.220 95.00000% : 24784.524us 00:12:32.220 98.00000% : 25976.087us 
00:12:32.220 99.00000% : 31695.593us 00:12:32.220 99.50000% : 39083.287us 00:12:32.220 99.90000% : 40036.538us 00:12:32.220 99.99000% : 40274.851us 00:12:32.220 99.99900% : 40274.851us 00:12:32.220 99.99990% : 40274.851us 00:12:32.220 99.99999% : 40274.851us 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11200.698us 00:12:32.220 10.00000% : 11736.902us 00:12:32.220 25.00000% : 12094.371us 00:12:32.220 50.00000% : 13285.935us 00:12:32.220 75.00000% : 22997.178us 00:12:32.220 90.00000% : 24069.585us 00:12:32.220 95.00000% : 24665.367us 00:12:32.220 98.00000% : 25499.462us 00:12:32.220 99.00000% : 26214.400us 00:12:32.220 99.50000% : 30504.029us 00:12:32.220 99.90000% : 31695.593us 00:12:32.220 99.99000% : 31933.905us 00:12:32.220 99.99900% : 31933.905us 00:12:32.220 99.99990% : 31933.905us 00:12:32.220 99.99999% : 31933.905us 00:12:32.220 00:12:32.220 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:32.220 ================================================================================= 00:12:32.220 1.00000% : 11260.276us 00:12:32.220 10.00000% : 11677.324us 00:12:32.220 25.00000% : 12094.371us 00:12:32.220 50.00000% : 13226.356us 00:12:32.220 75.00000% : 22878.022us 00:12:32.220 90.00000% : 24188.742us 00:12:32.220 95.00000% : 24665.367us 00:12:32.220 98.00000% : 25499.462us 00:12:32.220 99.00000% : 26095.244us 00:12:32.220 99.50000% : 28001.745us 00:12:32.220 99.90000% : 29074.153us 00:12:32.220 99.99000% : 29312.465us 00:12:32.220 99.99900% : 29312.465us 00:12:32.220 99.99990% : 29312.465us 00:12:32.220 99.99999% : 29312.465us 00:12:32.220 00:12:32.220 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:32.220 ============================================================================== 00:12:32.220 Range in us Cumulative IO count 00:12:32.220 10604.916 - 10664.495: 0.0126% ( 1) 00:12:32.220 10664.495 - 10724.073: 0.0504% ( 3) 00:12:32.220 10724.073 - 10783.651: 0.1638% ( 9) 00:12:32.220 10783.651 - 10843.229: 0.2394% ( 6) 00:12:32.220 10843.229 - 10902.807: 0.3150% ( 6) 00:12:32.220 10902.807 - 10962.385: 0.4662% ( 12) 00:12:32.220 10962.385 - 11021.964: 1.0207% ( 44) 00:12:32.220 11021.964 - 11081.542: 1.3609% ( 27) 00:12:32.220 11081.542 - 11141.120: 1.9153% ( 44) 00:12:32.220 11141.120 - 11200.698: 2.6588% ( 59) 00:12:32.220 11200.698 - 11260.276: 3.4148% ( 60) 00:12:32.220 11260.276 - 11319.855: 4.0323% ( 49) 00:12:32.220 11319.855 - 11379.433: 4.7253% ( 55) 00:12:32.220 11379.433 - 11439.011: 5.7712% ( 83) 00:12:32.220 11439.011 - 11498.589: 7.2707% ( 119) 00:12:32.220 11498.589 - 11558.167: 8.5811% ( 104) 00:12:32.220 11558.167 - 11617.745: 10.0428% ( 116) 00:12:32.221 11617.745 - 11677.324: 12.0842% ( 162) 00:12:32.221 11677.324 - 11736.902: 13.6971% ( 128) 00:12:32.221 11736.902 - 11796.480: 15.3226% ( 129) 00:12:32.221 11796.480 - 11856.058: 17.3891% ( 164) 00:12:32.221 11856.058 - 11915.636: 19.3926% ( 159) 00:12:32.221 11915.636 - 11975.215: 21.1694% ( 141) 00:12:32.221 11975.215 - 12034.793: 23.3871% ( 176) 00:12:32.221 12034.793 - 12094.371: 25.2520% ( 148) 00:12:32.221 12094.371 - 12153.949: 27.0413% ( 142) 00:12:32.221 12153.949 - 12213.527: 28.8180% ( 141) 00:12:32.221 12213.527 - 12273.105: 30.5192% ( 135) 00:12:32.221 12273.105 - 12332.684: 31.8674% ( 107) 00:12:32.221 12332.684 - 12392.262: 33.1275% ( 100) 00:12:32.221 12392.262 - 12451.840: 34.5010% ( 109) 00:12:32.221 
12451.840 - 12511.418: 35.7989% ( 103) 00:12:32.221 12511.418 - 12570.996: 37.2354% ( 114) 00:12:32.221 12570.996 - 12630.575: 38.7349% ( 119) 00:12:32.221 12630.575 - 12690.153: 40.2470% ( 120) 00:12:32.221 12690.153 - 12749.731: 41.8977% ( 131) 00:12:32.221 12749.731 - 12809.309: 43.2964% ( 111) 00:12:32.221 12809.309 - 12868.887: 44.4682% ( 93) 00:12:32.221 12868.887 - 12928.465: 45.6401% ( 93) 00:12:32.221 12928.465 - 12988.044: 47.1648% ( 121) 00:12:32.221 12988.044 - 13047.622: 48.4501% ( 102) 00:12:32.221 13047.622 - 13107.200: 49.5590% ( 88) 00:12:32.221 13107.200 - 13166.778: 50.7056% ( 91) 00:12:32.221 13166.778 - 13226.356: 51.7011% ( 79) 00:12:32.221 13226.356 - 13285.935: 52.7218% ( 81) 00:12:32.221 13285.935 - 13345.513: 53.6542% ( 74) 00:12:32.221 13345.513 - 13405.091: 54.8513% ( 95) 00:12:32.221 13405.091 - 13464.669: 56.0610% ( 96) 00:12:32.221 13464.669 - 13524.247: 56.9052% ( 67) 00:12:32.221 13524.247 - 13583.825: 57.5605% ( 52) 00:12:32.221 13583.825 - 13643.404: 58.3039% ( 59) 00:12:32.221 13643.404 - 13702.982: 59.0978% ( 63) 00:12:32.221 13702.982 - 13762.560: 59.7530% ( 52) 00:12:32.221 13762.560 - 13822.138: 60.3831% ( 50) 00:12:32.221 13822.138 - 13881.716: 61.0257% ( 51) 00:12:32.221 13881.716 - 13941.295: 61.6935% ( 53) 00:12:32.221 13941.295 - 14000.873: 62.1724% ( 38) 00:12:32.221 14000.873 - 14060.451: 62.6512% ( 38) 00:12:32.221 14060.451 - 14120.029: 63.2182% ( 45) 00:12:32.221 14120.029 - 14179.607: 63.7727% ( 44) 00:12:32.221 14179.607 - 14239.185: 64.3145% ( 43) 00:12:32.221 14239.185 - 14298.764: 64.6043% ( 23) 00:12:32.221 14298.764 - 14358.342: 64.8690% ( 21) 00:12:32.221 14358.342 - 14417.920: 65.1840% ( 25) 00:12:32.221 14417.920 - 14477.498: 65.4360% ( 20) 00:12:32.221 14477.498 - 14537.076: 65.6502% ( 17) 00:12:32.221 14537.076 - 14596.655: 65.8266% ( 14) 00:12:32.221 14596.655 - 14656.233: 65.9778% ( 12) 00:12:32.221 14656.233 - 14715.811: 66.1164% ( 11) 00:12:32.221 14715.811 - 14775.389: 66.2298% ( 9) 00:12:32.221 14775.389 - 14834.967: 66.3810% ( 12) 00:12:32.221 14834.967 - 14894.545: 66.5197% ( 11) 00:12:32.221 14894.545 - 14954.124: 66.6457% ( 10) 00:12:32.221 14954.124 - 15013.702: 66.8851% ( 19) 00:12:32.221 15013.702 - 15073.280: 67.0741% ( 15) 00:12:32.221 15073.280 - 15132.858: 67.1749% ( 8) 00:12:32.221 15132.858 - 15192.436: 67.2001% ( 2) 00:12:32.221 15192.436 - 15252.015: 67.2127% ( 1) 00:12:32.221 15252.015 - 15371.171: 67.2379% ( 2) 00:12:32.221 15371.171 - 15490.327: 67.2631% ( 2) 00:12:32.221 15490.327 - 15609.484: 67.3135% ( 4) 00:12:32.221 15609.484 - 15728.640: 67.4017% ( 7) 00:12:32.221 15728.640 - 15847.796: 67.5151% ( 9) 00:12:32.221 15847.796 - 15966.953: 67.6285% ( 9) 00:12:32.221 15966.953 - 16086.109: 67.7545% ( 10) 00:12:32.221 16086.109 - 16205.265: 67.9183% ( 13) 00:12:32.221 16205.265 - 16324.422: 68.0948% ( 14) 00:12:32.221 16324.422 - 16443.578: 68.2586% ( 13) 00:12:32.221 16443.578 - 16562.735: 68.3720% ( 9) 00:12:32.221 16562.735 - 16681.891: 68.4980% ( 10) 00:12:32.221 16681.891 - 16801.047: 68.5862% ( 7) 00:12:32.221 16801.047 - 16920.204: 68.6618% ( 6) 00:12:32.221 16920.204 - 17039.360: 68.7878% ( 10) 00:12:32.221 17039.360 - 17158.516: 68.8760% ( 7) 00:12:32.221 17158.516 - 17277.673: 69.0146% ( 11) 00:12:32.221 17277.673 - 17396.829: 69.1784% ( 13) 00:12:32.221 17396.829 - 17515.985: 69.2666% ( 7) 00:12:32.221 17515.985 - 17635.142: 69.3296% ( 5) 00:12:32.221 17635.142 - 17754.298: 69.3926% ( 5) 00:12:32.221 17754.298 - 17873.455: 69.4430% ( 4) 00:12:32.221 17873.455 - 17992.611: 69.5186% ( 6) 
00:12:32.221 17992.611 - 18111.767: 69.6069% ( 7) 00:12:32.221 18111.767 - 18230.924: 69.6825% ( 6) 00:12:32.221 18230.924 - 18350.080: 69.7077% ( 2) 00:12:32.221 18350.080 - 18469.236: 69.7455% ( 3) 00:12:32.221 18469.236 - 18588.393: 69.7959% ( 4) 00:12:32.221 18588.393 - 18707.549: 69.9723% ( 14) 00:12:32.221 18707.549 - 18826.705: 70.0353% ( 5) 00:12:32.221 18826.705 - 18945.862: 70.1613% ( 10) 00:12:32.221 18945.862 - 19065.018: 70.2243% ( 5) 00:12:32.221 19065.018 - 19184.175: 70.3251% ( 8) 00:12:32.221 19184.175 - 19303.331: 70.4133% ( 7) 00:12:32.221 19303.331 - 19422.487: 70.5141% ( 8) 00:12:32.221 19422.487 - 19541.644: 70.6023% ( 7) 00:12:32.221 19541.644 - 19660.800: 70.7157% ( 9) 00:12:32.221 19660.800 - 19779.956: 70.8039% ( 7) 00:12:32.221 19779.956 - 19899.113: 70.8921% ( 7) 00:12:32.221 19899.113 - 20018.269: 70.9047% ( 1) 00:12:32.221 20018.269 - 20137.425: 70.9551% ( 4) 00:12:32.221 20137.425 - 20256.582: 71.0181% ( 5) 00:12:32.221 20256.582 - 20375.738: 71.0938% ( 6) 00:12:32.221 20375.738 - 20494.895: 71.1442% ( 4) 00:12:32.221 20494.895 - 20614.051: 71.1820% ( 3) 00:12:32.221 20614.051 - 20733.207: 71.1946% ( 1) 00:12:32.221 20733.207 - 20852.364: 71.2954% ( 8) 00:12:32.221 20852.364 - 20971.520: 71.3584% ( 5) 00:12:32.221 20971.520 - 21090.676: 71.4088% ( 4) 00:12:32.221 21090.676 - 21209.833: 71.4970% ( 7) 00:12:32.221 21209.833 - 21328.989: 71.5600% ( 5) 00:12:32.221 21328.989 - 21448.145: 71.5852% ( 2) 00:12:32.221 21448.145 - 21567.302: 71.6230% ( 3) 00:12:32.221 21567.302 - 21686.458: 71.9002% ( 22) 00:12:32.221 21686.458 - 21805.615: 72.1396% ( 19) 00:12:32.221 21805.615 - 21924.771: 72.4798% ( 27) 00:12:32.221 21924.771 - 22043.927: 72.9209% ( 35) 00:12:32.221 22043.927 - 22163.084: 73.3367% ( 33) 00:12:32.221 22163.084 - 22282.240: 73.7525% ( 33) 00:12:32.221 22282.240 - 22401.396: 74.5086% ( 60) 00:12:32.221 22401.396 - 22520.553: 75.6048% ( 87) 00:12:32.221 22520.553 - 22639.709: 76.7389% ( 90) 00:12:32.221 22639.709 - 22758.865: 77.5832% ( 67) 00:12:32.221 22758.865 - 22878.022: 78.5030% ( 73) 00:12:32.221 22878.022 - 22997.178: 79.2843% ( 62) 00:12:32.221 22997.178 - 23116.335: 80.2167% ( 74) 00:12:32.221 23116.335 - 23235.491: 80.9224% ( 56) 00:12:32.221 23235.491 - 23354.647: 81.8674% ( 75) 00:12:32.221 23354.647 - 23473.804: 82.6991% ( 66) 00:12:32.221 23473.804 - 23592.960: 83.5811% ( 70) 00:12:32.221 23592.960 - 23712.116: 84.5136% ( 74) 00:12:32.221 23712.116 - 23831.273: 85.6729% ( 92) 00:12:32.221 23831.273 - 23950.429: 86.7440% ( 85) 00:12:32.221 23950.429 - 24069.585: 87.7142% ( 77) 00:12:32.221 24069.585 - 24188.742: 88.8609% ( 91) 00:12:32.221 24188.742 - 24307.898: 89.7177% ( 68) 00:12:32.221 24307.898 - 24427.055: 90.4864% ( 61) 00:12:32.221 24427.055 - 24546.211: 91.0660% ( 46) 00:12:32.221 24546.211 - 24665.367: 91.6709% ( 48) 00:12:32.221 24665.367 - 24784.524: 92.0993% ( 34) 00:12:32.221 24784.524 - 24903.680: 92.5277% ( 34) 00:12:32.221 24903.680 - 25022.836: 93.1704% ( 51) 00:12:32.221 25022.836 - 25141.993: 93.6996% ( 42) 00:12:32.221 25141.993 - 25261.149: 94.3044% ( 48) 00:12:32.221 25261.149 - 25380.305: 94.7581% ( 36) 00:12:32.221 25380.305 - 25499.462: 95.2369% ( 38) 00:12:32.221 25499.462 - 25618.618: 95.6905% ( 36) 00:12:32.221 25618.618 - 25737.775: 96.1064% ( 33) 00:12:32.221 25737.775 - 25856.931: 96.6104% ( 40) 00:12:32.221 25856.931 - 25976.087: 97.0640% ( 36) 00:12:32.221 25976.087 - 26095.244: 97.3664% ( 24) 00:12:32.221 26095.244 - 26214.400: 97.6562% ( 23) 00:12:32.221 26214.400 - 26333.556: 97.9209% ( 21) 00:12:32.221 
26333.556 - 26452.713: 98.0469% ( 10) 00:12:32.221 26452.713 - 26571.869: 98.1225% ( 6) 00:12:32.221 26571.869 - 26691.025: 98.2611% ( 11) 00:12:32.221 26691.025 - 26810.182: 98.3493% ( 7) 00:12:32.221 26810.182 - 26929.338: 98.3871% ( 3) 00:12:32.221 36461.847 - 36700.160: 98.4753% ( 7) 00:12:32.221 36700.160 - 36938.473: 98.5509% ( 6) 00:12:32.221 36938.473 - 37176.785: 98.6265% ( 6) 00:12:32.221 37176.785 - 37415.098: 98.7147% ( 7) 00:12:32.221 37415.098 - 37653.411: 98.7903% ( 6) 00:12:32.221 37653.411 - 37891.724: 98.8911% ( 8) 00:12:32.221 37891.724 - 38130.036: 98.9919% ( 8) 00:12:32.221 38130.036 - 38368.349: 99.0675% ( 6) 00:12:32.221 38368.349 - 38606.662: 99.1179% ( 4) 00:12:32.221 38606.662 - 38844.975: 99.1935% ( 6) 00:12:32.221 44087.855 - 44326.167: 99.2566% ( 5) 00:12:32.221 44326.167 - 44564.480: 99.3322% ( 6) 00:12:32.221 44564.480 - 44802.793: 99.4078% ( 6) 00:12:32.221 44802.793 - 45041.105: 99.4960% ( 7) 00:12:32.221 45041.105 - 45279.418: 99.5842% ( 7) 00:12:32.221 45279.418 - 45517.731: 99.6472% ( 5) 00:12:32.221 45517.731 - 45756.044: 99.7480% ( 8) 00:12:32.221 45756.044 - 45994.356: 99.8236% ( 6) 00:12:32.221 45994.356 - 46232.669: 99.9244% ( 8) 00:12:32.221 46232.669 - 46470.982: 100.0000% ( 6) 00:12:32.221 00:12:32.221 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:32.222 ============================================================================== 00:12:32.222 Range in us Cumulative IO count 00:12:32.222 11021.964 - 11081.542: 0.0504% ( 4) 00:12:32.222 11081.542 - 11141.120: 0.1134% ( 5) 00:12:32.222 11141.120 - 11200.698: 0.2646% ( 12) 00:12:32.222 11200.698 - 11260.276: 0.5418% ( 22) 00:12:32.222 11260.276 - 11319.855: 1.3987% ( 68) 00:12:32.222 11319.855 - 11379.433: 2.1295% ( 58) 00:12:32.222 11379.433 - 11439.011: 3.0620% ( 74) 00:12:32.222 11439.011 - 11498.589: 4.3095% ( 99) 00:12:32.222 11498.589 - 11558.167: 6.2500% ( 154) 00:12:32.222 11558.167 - 11617.745: 7.8755% ( 129) 00:12:32.222 11617.745 - 11677.324: 9.5136% ( 130) 00:12:32.222 11677.324 - 11736.902: 11.4163% ( 151) 00:12:32.222 11736.902 - 11796.480: 13.2686% ( 147) 00:12:32.222 11796.480 - 11856.058: 15.7006% ( 193) 00:12:32.222 11856.058 - 11915.636: 18.2712% ( 204) 00:12:32.222 11915.636 - 11975.215: 20.3755% ( 167) 00:12:32.222 11975.215 - 12034.793: 22.6815% ( 183) 00:12:32.222 12034.793 - 12094.371: 25.0378% ( 187) 00:12:32.222 12094.371 - 12153.949: 27.1421% ( 167) 00:12:32.222 12153.949 - 12213.527: 29.3725% ( 177) 00:12:32.222 12213.527 - 12273.105: 31.3130% ( 154) 00:12:32.222 12273.105 - 12332.684: 33.2031% ( 150) 00:12:32.222 12332.684 - 12392.262: 34.5766% ( 109) 00:12:32.222 12392.262 - 12451.840: 36.3407% ( 140) 00:12:32.222 12451.840 - 12511.418: 37.6260% ( 102) 00:12:32.222 12511.418 - 12570.996: 38.9617% ( 106) 00:12:32.222 12570.996 - 12630.575: 40.0832% ( 89) 00:12:32.222 12630.575 - 12690.153: 41.4315% ( 107) 00:12:32.222 12690.153 - 12749.731: 42.4521% ( 81) 00:12:32.222 12749.731 - 12809.309: 43.3594% ( 72) 00:12:32.222 12809.309 - 12868.887: 44.1658% ( 64) 00:12:32.222 12868.887 - 12928.465: 45.0353% ( 69) 00:12:32.222 12928.465 - 12988.044: 45.9803% ( 75) 00:12:32.222 12988.044 - 13047.622: 47.0010% ( 81) 00:12:32.222 13047.622 - 13107.200: 48.0595% ( 84) 00:12:32.222 13107.200 - 13166.778: 49.3322% ( 101) 00:12:32.222 13166.778 - 13226.356: 50.6300% ( 103) 00:12:32.222 13226.356 - 13285.935: 51.8523% ( 97) 00:12:32.222 13285.935 - 13345.513: 52.9360% ( 86) 00:12:32.222 13345.513 - 13405.091: 54.3347% ( 111) 00:12:32.222 13405.091 - 13464.669: 
55.6956% ( 108) 00:12:32.222 13464.669 - 13524.247: 57.1069% ( 112) 00:12:32.222 13524.247 - 13583.825: 58.4929% ( 110) 00:12:32.222 13583.825 - 13643.404: 59.6900% ( 95) 00:12:32.222 13643.404 - 13702.982: 60.8241% ( 90) 00:12:32.222 13702.982 - 13762.560: 61.7188% ( 71) 00:12:32.222 13762.560 - 13822.138: 62.3866% ( 53) 00:12:32.222 13822.138 - 13881.716: 62.9158% ( 42) 00:12:32.222 13881.716 - 13941.295: 63.3191% ( 32) 00:12:32.222 13941.295 - 14000.873: 63.7853% ( 37) 00:12:32.222 14000.873 - 14060.451: 64.1507% ( 29) 00:12:32.222 14060.451 - 14120.029: 64.4405% ( 23) 00:12:32.222 14120.029 - 14179.607: 64.6547% ( 17) 00:12:32.222 14179.607 - 14239.185: 64.8185% ( 13) 00:12:32.222 14239.185 - 14298.764: 65.0202% ( 16) 00:12:32.222 14298.764 - 14358.342: 65.3478% ( 26) 00:12:32.222 14358.342 - 14417.920: 65.8014% ( 36) 00:12:32.222 14417.920 - 14477.498: 66.0534% ( 20) 00:12:32.222 14477.498 - 14537.076: 66.1920% ( 11) 00:12:32.222 14537.076 - 14596.655: 66.2802% ( 7) 00:12:32.222 14596.655 - 14656.233: 66.3684% ( 7) 00:12:32.222 14656.233 - 14715.811: 66.4567% ( 7) 00:12:32.222 14715.811 - 14775.389: 66.5449% ( 7) 00:12:32.222 14775.389 - 14834.967: 66.6205% ( 6) 00:12:32.222 14834.967 - 14894.545: 66.6709% ( 4) 00:12:32.222 14894.545 - 14954.124: 66.7591% ( 7) 00:12:32.222 14954.124 - 15013.702: 66.8347% ( 6) 00:12:32.222 15013.702 - 15073.280: 66.8977% ( 5) 00:12:32.222 15073.280 - 15132.858: 66.9355% ( 3) 00:12:32.222 15132.858 - 15192.436: 66.9607% ( 2) 00:12:32.222 15192.436 - 15252.015: 66.9985% ( 3) 00:12:32.222 15252.015 - 15371.171: 67.0363% ( 3) 00:12:32.222 15371.171 - 15490.327: 67.0867% ( 4) 00:12:32.222 15490.327 - 15609.484: 67.1749% ( 7) 00:12:32.222 15609.484 - 15728.640: 67.2253% ( 4) 00:12:32.222 15728.640 - 15847.796: 67.2757% ( 4) 00:12:32.222 15847.796 - 15966.953: 67.4395% ( 13) 00:12:32.222 15966.953 - 16086.109: 67.6159% ( 14) 00:12:32.222 16086.109 - 16205.265: 67.8553% ( 19) 00:12:32.222 16205.265 - 16324.422: 68.0444% ( 15) 00:12:32.222 16324.422 - 16443.578: 68.1326% ( 7) 00:12:32.222 16443.578 - 16562.735: 68.1704% ( 3) 00:12:32.222 16562.735 - 16681.891: 68.1956% ( 2) 00:12:32.222 16681.891 - 16801.047: 68.2334% ( 3) 00:12:32.222 16801.047 - 16920.204: 68.3090% ( 6) 00:12:32.222 16920.204 - 17039.360: 68.4098% ( 8) 00:12:32.222 17039.360 - 17158.516: 68.4854% ( 6) 00:12:32.222 17158.516 - 17277.673: 68.5736% ( 7) 00:12:32.222 17277.673 - 17396.829: 68.6492% ( 6) 00:12:32.222 17396.829 - 17515.985: 68.7500% ( 8) 00:12:32.222 17515.985 - 17635.142: 68.8256% ( 6) 00:12:32.222 17635.142 - 17754.298: 68.9264% ( 8) 00:12:32.222 17754.298 - 17873.455: 69.0398% ( 9) 00:12:32.222 17873.455 - 17992.611: 69.1406% ( 8) 00:12:32.222 17992.611 - 18111.767: 69.2666% ( 10) 00:12:32.222 18111.767 - 18230.924: 69.4178% ( 12) 00:12:32.222 18230.924 - 18350.080: 69.6699% ( 20) 00:12:32.222 18350.080 - 18469.236: 69.9849% ( 25) 00:12:32.222 18469.236 - 18588.393: 70.2243% ( 19) 00:12:32.222 18588.393 - 18707.549: 70.2999% ( 6) 00:12:32.222 18707.549 - 18826.705: 70.4385% ( 11) 00:12:32.222 18826.705 - 18945.862: 70.5645% ( 10) 00:12:32.222 18945.862 - 19065.018: 70.6653% ( 8) 00:12:32.222 19065.018 - 19184.175: 70.7661% ( 8) 00:12:32.222 19184.175 - 19303.331: 70.8291% ( 5) 00:12:32.222 19303.331 - 19422.487: 70.8669% ( 3) 00:12:32.222 19422.487 - 19541.644: 70.8921% ( 2) 00:12:32.222 19541.644 - 19660.800: 70.9173% ( 2) 00:12:32.222 19660.800 - 19779.956: 70.9551% ( 3) 00:12:32.222 19779.956 - 19899.113: 70.9677% ( 1) 00:12:32.222 21209.833 - 21328.989: 70.9929% ( 2) 
00:12:32.222 21328.989 - 21448.145: 71.0559% ( 5) 00:12:32.222 21448.145 - 21567.302: 71.1316% ( 6) 00:12:32.222 21567.302 - 21686.458: 71.2450% ( 9) 00:12:32.222 21686.458 - 21805.615: 71.3962% ( 12) 00:12:32.222 21805.615 - 21924.771: 71.6860% ( 23) 00:12:32.222 21924.771 - 22043.927: 71.8750% ( 15) 00:12:32.222 22043.927 - 22163.084: 72.1396% ( 21) 00:12:32.222 22163.084 - 22282.240: 72.4924% ( 28) 00:12:32.222 22282.240 - 22401.396: 72.7949% ( 24) 00:12:32.222 22401.396 - 22520.553: 73.2737% ( 38) 00:12:32.222 22520.553 - 22639.709: 73.7399% ( 37) 00:12:32.222 22639.709 - 22758.865: 74.2188% ( 38) 00:12:32.222 22758.865 - 22878.022: 74.8866% ( 53) 00:12:32.222 22878.022 - 22997.178: 75.8947% ( 80) 00:12:32.222 22997.178 - 23116.335: 76.9909% ( 87) 00:12:32.222 23116.335 - 23235.491: 77.8856% ( 71) 00:12:32.222 23235.491 - 23354.647: 79.2213% ( 106) 00:12:32.222 23354.647 - 23473.804: 80.2419% ( 81) 00:12:32.222 23473.804 - 23592.960: 82.0691% ( 145) 00:12:32.222 23592.960 - 23712.116: 83.8332% ( 140) 00:12:32.222 23712.116 - 23831.273: 85.2319% ( 111) 00:12:32.222 23831.273 - 23950.429: 86.4541% ( 97) 00:12:32.222 23950.429 - 24069.585: 88.3695% ( 152) 00:12:32.222 24069.585 - 24188.742: 90.4234% ( 163) 00:12:32.222 24188.742 - 24307.898: 91.4567% ( 82) 00:12:32.222 24307.898 - 24427.055: 93.1074% ( 131) 00:12:32.222 24427.055 - 24546.211: 93.8382% ( 58) 00:12:32.222 24546.211 - 24665.367: 94.4682% ( 50) 00:12:32.222 24665.367 - 24784.524: 95.3251% ( 68) 00:12:32.222 24784.524 - 24903.680: 95.7787% ( 36) 00:12:32.222 24903.680 - 25022.836: 96.3458% ( 45) 00:12:32.222 25022.836 - 25141.993: 96.7868% ( 35) 00:12:32.222 25141.993 - 25261.149: 97.1522% ( 29) 00:12:32.222 25261.149 - 25380.305: 97.3790% ( 18) 00:12:32.222 25380.305 - 25499.462: 97.6058% ( 18) 00:12:32.222 25499.462 - 25618.618: 97.7823% ( 14) 00:12:32.222 25618.618 - 25737.775: 97.9083% ( 10) 00:12:32.222 25737.775 - 25856.931: 98.0091% ( 8) 00:12:32.222 25856.931 - 25976.087: 98.0973% ( 7) 00:12:32.222 25976.087 - 26095.244: 98.2233% ( 10) 00:12:32.222 26095.244 - 26214.400: 98.2989% ( 6) 00:12:32.222 26214.400 - 26333.556: 98.3493% ( 4) 00:12:32.222 26333.556 - 26452.713: 98.3745% ( 2) 00:12:32.222 26452.713 - 26571.869: 98.3871% ( 1) 00:12:32.222 33840.407 - 34078.720: 98.4123% ( 2) 00:12:32.222 34078.720 - 34317.033: 98.5005% ( 7) 00:12:32.222 34317.033 - 34555.345: 98.5887% ( 7) 00:12:32.222 34555.345 - 34793.658: 98.6769% ( 7) 00:12:32.222 34793.658 - 35031.971: 98.7651% ( 7) 00:12:32.222 35031.971 - 35270.284: 98.8533% ( 7) 00:12:32.222 35270.284 - 35508.596: 98.9415% ( 7) 00:12:32.222 35508.596 - 35746.909: 99.0423% ( 8) 00:12:32.223 35746.909 - 35985.222: 99.1305% ( 7) 00:12:32.223 35985.222 - 36223.535: 99.1935% ( 5) 00:12:32.223 41943.040 - 42181.353: 99.2944% ( 8) 00:12:32.223 42181.353 - 42419.665: 99.3826% ( 7) 00:12:32.223 42419.665 - 42657.978: 99.4708% ( 7) 00:12:32.223 42657.978 - 42896.291: 99.5716% ( 8) 00:12:32.223 42896.291 - 43134.604: 99.6598% ( 7) 00:12:32.223 43134.604 - 43372.916: 99.7480% ( 7) 00:12:32.223 43372.916 - 43611.229: 99.8362% ( 7) 00:12:32.223 43611.229 - 43849.542: 99.9370% ( 8) 00:12:32.223 43849.542 - 44087.855: 100.0000% ( 5) 00:12:32.223 00:12:32.223 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:32.223 ============================================================================== 00:12:32.223 Range in us Cumulative IO count 00:12:32.223 10664.495 - 10724.073: 0.0126% ( 1) 00:12:32.223 10783.651 - 10843.229: 0.0252% ( 1) 00:12:32.223 10843.229 - 10902.807: 
0.0756% ( 4) 00:12:32.223 10902.807 - 10962.385: 0.2646% ( 15) 00:12:32.223 10962.385 - 11021.964: 0.5040% ( 19) 00:12:32.223 11021.964 - 11081.542: 0.8443% ( 27) 00:12:32.223 11081.542 - 11141.120: 1.2727% ( 34) 00:12:32.223 11141.120 - 11200.698: 2.1673% ( 71) 00:12:32.223 11200.698 - 11260.276: 2.6588% ( 39) 00:12:32.223 11260.276 - 11319.855: 3.2762% ( 49) 00:12:32.223 11319.855 - 11379.433: 3.9945% ( 57) 00:12:32.223 11379.433 - 11439.011: 4.7001% ( 56) 00:12:32.223 11439.011 - 11498.589: 5.7838% ( 86) 00:12:32.223 11498.589 - 11558.167: 7.0565% ( 101) 00:12:32.223 11558.167 - 11617.745: 9.0978% ( 162) 00:12:32.223 11617.745 - 11677.324: 10.3705% ( 101) 00:12:32.223 11677.324 - 11736.902: 11.9582% ( 126) 00:12:32.223 11736.902 - 11796.480: 13.6467% ( 134) 00:12:32.223 11796.480 - 11856.058: 15.7258% ( 165) 00:12:32.223 11856.058 - 11915.636: 17.7797% ( 163) 00:12:32.223 11915.636 - 11975.215: 20.1991% ( 192) 00:12:32.223 11975.215 - 12034.793: 22.7949% ( 206) 00:12:32.223 12034.793 - 12094.371: 25.2772% ( 197) 00:12:32.223 12094.371 - 12153.949: 27.6336% ( 187) 00:12:32.223 12153.949 - 12213.527: 29.2969% ( 132) 00:12:32.223 12213.527 - 12273.105: 31.2122% ( 152) 00:12:32.223 12273.105 - 12332.684: 33.0897% ( 149) 00:12:32.223 12332.684 - 12392.262: 34.5010% ( 112) 00:12:32.223 12392.262 - 12451.840: 35.6855% ( 94) 00:12:32.223 12451.840 - 12511.418: 36.9330% ( 99) 00:12:32.223 12511.418 - 12570.996: 38.1552% ( 97) 00:12:32.223 12570.996 - 12630.575: 39.2389% ( 86) 00:12:32.223 12630.575 - 12690.153: 40.5872% ( 107) 00:12:32.223 12690.153 - 12749.731: 41.5953% ( 80) 00:12:32.223 12749.731 - 12809.309: 42.9435% ( 107) 00:12:32.223 12809.309 - 12868.887: 43.9138% ( 77) 00:12:32.223 12868.887 - 12928.465: 44.9975% ( 86) 00:12:32.223 12928.465 - 12988.044: 45.9677% ( 77) 00:12:32.223 12988.044 - 13047.622: 47.0262% ( 84) 00:12:32.223 13047.622 - 13107.200: 48.2737% ( 99) 00:12:32.223 13107.200 - 13166.778: 49.7228% ( 115) 00:12:32.223 13166.778 - 13226.356: 51.1089% ( 110) 00:12:32.223 13226.356 - 13285.935: 52.4320% ( 105) 00:12:32.223 13285.935 - 13345.513: 53.9693% ( 122) 00:12:32.223 13345.513 - 13405.091: 55.3931% ( 113) 00:12:32.223 13405.091 - 13464.669: 56.6154% ( 97) 00:12:32.223 13464.669 - 13524.247: 57.5479% ( 74) 00:12:32.223 13524.247 - 13583.825: 58.5181% ( 77) 00:12:32.223 13583.825 - 13643.404: 59.4884% ( 77) 00:12:32.223 13643.404 - 13702.982: 60.3201% ( 66) 00:12:32.223 13702.982 - 13762.560: 61.0257% ( 56) 00:12:32.223 13762.560 - 13822.138: 61.6935% ( 53) 00:12:32.223 13822.138 - 13881.716: 62.2354% ( 43) 00:12:32.223 13881.716 - 13941.295: 62.7142% ( 38) 00:12:32.223 13941.295 - 14000.873: 63.1930% ( 38) 00:12:32.223 14000.873 - 14060.451: 63.7097% ( 41) 00:12:32.223 14060.451 - 14120.029: 64.1759% ( 37) 00:12:32.223 14120.029 - 14179.607: 64.6925% ( 41) 00:12:32.223 14179.607 - 14239.185: 64.9950% ( 24) 00:12:32.223 14239.185 - 14298.764: 65.2596% ( 21) 00:12:32.223 14298.764 - 14358.342: 65.5116% ( 20) 00:12:32.223 14358.342 - 14417.920: 65.7258% ( 17) 00:12:32.223 14417.920 - 14477.498: 65.9652% ( 19) 00:12:32.223 14477.498 - 14537.076: 66.1290% ( 13) 00:12:32.223 14537.076 - 14596.655: 66.2802% ( 12) 00:12:32.223 14596.655 - 14656.233: 66.3936% ( 9) 00:12:32.223 14656.233 - 14715.811: 66.4567% ( 5) 00:12:32.223 14715.811 - 14775.389: 66.5323% ( 6) 00:12:32.223 14775.389 - 14834.967: 66.6205% ( 7) 00:12:32.223 14834.967 - 14894.545: 66.6835% ( 5) 00:12:32.223 14894.545 - 14954.124: 66.7465% ( 5) 00:12:32.223 14954.124 - 15013.702: 66.8095% ( 5) 00:12:32.223 
15013.702 - 15073.280: 66.8473% ( 3) 00:12:32.223 15073.280 - 15132.858: 66.8851% ( 3) 00:12:32.223 15132.858 - 15192.436: 66.8977% ( 1) 00:12:32.223 15192.436 - 15252.015: 66.9607% ( 5) 00:12:32.223 15252.015 - 15371.171: 67.0237% ( 5) 00:12:32.223 15371.171 - 15490.327: 67.1119% ( 7) 00:12:32.223 15490.327 - 15609.484: 67.2883% ( 14) 00:12:32.223 15609.484 - 15728.640: 67.4899% ( 16) 00:12:32.223 15728.640 - 15847.796: 67.5907% ( 8) 00:12:32.223 15847.796 - 15966.953: 67.6789% ( 7) 00:12:32.223 15966.953 - 16086.109: 67.7923% ( 9) 00:12:32.223 16086.109 - 16205.265: 67.8931% ( 8) 00:12:32.223 16205.265 - 16324.422: 68.0192% ( 10) 00:12:32.223 16324.422 - 16443.578: 68.1452% ( 10) 00:12:32.223 16443.578 - 16562.735: 68.2208% ( 6) 00:12:32.223 16562.735 - 16681.891: 68.2838% ( 5) 00:12:32.223 16681.891 - 16801.047: 68.3720% ( 7) 00:12:32.223 16801.047 - 16920.204: 68.4224% ( 4) 00:12:32.223 16920.204 - 17039.360: 68.4854% ( 5) 00:12:32.223 17039.360 - 17158.516: 68.5736% ( 7) 00:12:32.223 17158.516 - 17277.673: 68.6492% ( 6) 00:12:32.223 17277.673 - 17396.829: 68.7248% ( 6) 00:12:32.223 17396.829 - 17515.985: 68.7752% ( 4) 00:12:32.223 17515.985 - 17635.142: 68.8760% ( 8) 00:12:32.223 17635.142 - 17754.298: 69.0398% ( 13) 00:12:32.223 17754.298 - 17873.455: 69.4304% ( 31) 00:12:32.223 17873.455 - 17992.611: 69.6195% ( 15) 00:12:32.223 17992.611 - 18111.767: 69.7833% ( 13) 00:12:32.223 18111.767 - 18230.924: 69.9219% ( 11) 00:12:32.223 18230.924 - 18350.080: 70.0353% ( 9) 00:12:32.223 18350.080 - 18469.236: 70.0983% ( 5) 00:12:32.223 18469.236 - 18588.393: 70.1739% ( 6) 00:12:32.223 18588.393 - 18707.549: 70.2495% ( 6) 00:12:32.223 18707.549 - 18826.705: 70.3377% ( 7) 00:12:32.223 18826.705 - 18945.862: 70.4259% ( 7) 00:12:32.223 18945.862 - 19065.018: 70.5015% ( 6) 00:12:32.223 19065.018 - 19184.175: 70.5771% ( 6) 00:12:32.223 19184.175 - 19303.331: 70.6527% ( 6) 00:12:32.223 19303.331 - 19422.487: 70.7787% ( 10) 00:12:32.223 19422.487 - 19541.644: 70.8921% ( 9) 00:12:32.223 19541.644 - 19660.800: 70.9425% ( 4) 00:12:32.223 19660.800 - 19779.956: 70.9677% ( 2) 00:12:32.223 21209.833 - 21328.989: 71.0433% ( 6) 00:12:32.223 21328.989 - 21448.145: 71.1568% ( 9) 00:12:32.223 21448.145 - 21567.302: 71.2450% ( 7) 00:12:32.223 21567.302 - 21686.458: 71.3332% ( 7) 00:12:32.223 21686.458 - 21805.615: 71.4088% ( 6) 00:12:32.223 21805.615 - 21924.771: 71.5348% ( 10) 00:12:32.223 21924.771 - 22043.927: 71.6734% ( 11) 00:12:32.223 22043.927 - 22163.084: 71.8246% ( 12) 00:12:32.223 22163.084 - 22282.240: 72.1144% ( 23) 00:12:32.223 22282.240 - 22401.396: 72.5050% ( 31) 00:12:32.223 22401.396 - 22520.553: 73.0595% ( 44) 00:12:32.223 22520.553 - 22639.709: 73.5131% ( 36) 00:12:32.223 22639.709 - 22758.865: 73.9415% ( 34) 00:12:32.223 22758.865 - 22878.022: 74.7228% ( 62) 00:12:32.223 22878.022 - 22997.178: 75.5418% ( 65) 00:12:32.223 22997.178 - 23116.335: 76.7263% ( 94) 00:12:32.223 23116.335 - 23235.491: 77.8856% ( 92) 00:12:32.223 23235.491 - 23354.647: 79.2213% ( 106) 00:12:32.223 23354.647 - 23473.804: 80.2293% ( 80) 00:12:32.223 23473.804 - 23592.960: 81.5398% ( 104) 00:12:32.223 23592.960 - 23712.116: 83.5181% ( 157) 00:12:32.223 23712.116 - 23831.273: 85.4839% ( 156) 00:12:32.223 23831.273 - 23950.429: 87.0968% ( 128) 00:12:32.223 23950.429 - 24069.585: 89.1885% ( 166) 00:12:32.223 24069.585 - 24188.742: 90.9778% ( 142) 00:12:32.223 24188.742 - 24307.898: 91.9103% ( 74) 00:12:32.223 24307.898 - 24427.055: 92.6663% ( 60) 00:12:32.223 24427.055 - 24546.211: 93.5610% ( 71) 00:12:32.223 24546.211 - 
24665.367: 94.3296% ( 61) 00:12:32.223 24665.367 - 24784.524: 94.8967% ( 45) 00:12:32.223 24784.524 - 24903.680: 95.4133% ( 41) 00:12:32.223 24903.680 - 25022.836: 95.8795% ( 37) 00:12:32.223 25022.836 - 25141.993: 96.2324% ( 28) 00:12:32.223 25141.993 - 25261.149: 96.6608% ( 34) 00:12:32.223 25261.149 - 25380.305: 96.9884% ( 26) 00:12:32.223 25380.305 - 25499.462: 97.2530% ( 21) 00:12:32.223 25499.462 - 25618.618: 97.4546% ( 16) 00:12:32.223 25618.618 - 25737.775: 97.6058% ( 12) 00:12:32.223 25737.775 - 25856.931: 97.7319% ( 10) 00:12:32.223 25856.931 - 25976.087: 97.9335% ( 16) 00:12:32.223 25976.087 - 26095.244: 98.0343% ( 8) 00:12:32.223 26095.244 - 26214.400: 98.0847% ( 4) 00:12:32.223 26214.400 - 26333.556: 98.1603% ( 6) 00:12:32.223 26333.556 - 26452.713: 98.2359% ( 6) 00:12:32.223 26452.713 - 26571.869: 98.2737% ( 3) 00:12:32.223 26571.869 - 26691.025: 98.3115% ( 3) 00:12:32.223 26691.025 - 26810.182: 98.3619% ( 4) 00:12:32.223 26810.182 - 26929.338: 98.3871% ( 2) 00:12:32.223 32410.531 - 32648.844: 98.4501% ( 5) 00:12:32.223 32648.844 - 32887.156: 98.5383% ( 7) 00:12:32.223 32887.156 - 33125.469: 98.6265% ( 7) 00:12:32.223 33125.469 - 33363.782: 98.7273% ( 8) 00:12:32.224 33363.782 - 33602.095: 98.8155% ( 7) 00:12:32.224 33602.095 - 33840.407: 98.9037% ( 7) 00:12:32.224 33840.407 - 34078.720: 98.9919% ( 7) 00:12:32.224 34078.720 - 34317.033: 99.0801% ( 7) 00:12:32.224 34317.033 - 34555.345: 99.1809% ( 8) 00:12:32.224 34555.345 - 34793.658: 99.1935% ( 1) 00:12:32.224 40513.164 - 40751.476: 99.2566% ( 5) 00:12:32.224 40751.476 - 40989.789: 99.3448% ( 7) 00:12:32.224 40989.789 - 41228.102: 99.4330% ( 7) 00:12:32.224 41228.102 - 41466.415: 99.5212% ( 7) 00:12:32.224 41466.415 - 41704.727: 99.6220% ( 8) 00:12:32.224 41704.727 - 41943.040: 99.7102% ( 7) 00:12:32.224 41943.040 - 42181.353: 99.7984% ( 7) 00:12:32.224 42181.353 - 42419.665: 99.8992% ( 8) 00:12:32.224 42419.665 - 42657.978: 99.9874% ( 7) 00:12:32.224 42657.978 - 42896.291: 100.0000% ( 1) 00:12:32.224 00:12:32.224 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:32.224 ============================================================================== 00:12:32.224 Range in us Cumulative IO count 00:12:32.224 10664.495 - 10724.073: 0.0252% ( 2) 00:12:32.224 10724.073 - 10783.651: 0.0756% ( 4) 00:12:32.224 10783.651 - 10843.229: 0.1260% ( 4) 00:12:32.224 10843.229 - 10902.807: 0.2268% ( 8) 00:12:32.224 10902.807 - 10962.385: 0.3906% ( 13) 00:12:32.224 10962.385 - 11021.964: 0.6930% ( 24) 00:12:32.224 11021.964 - 11081.542: 0.9073% ( 17) 00:12:32.224 11081.542 - 11141.120: 1.2349% ( 26) 00:12:32.224 11141.120 - 11200.698: 1.5373% ( 24) 00:12:32.224 11200.698 - 11260.276: 1.9405% ( 32) 00:12:32.224 11260.276 - 11319.855: 2.5580% ( 49) 00:12:32.224 11319.855 - 11379.433: 3.4022% ( 67) 00:12:32.224 11379.433 - 11439.011: 4.6371% ( 98) 00:12:32.224 11439.011 - 11498.589: 6.0988% ( 116) 00:12:32.224 11498.589 - 11558.167: 7.5605% ( 116) 00:12:32.224 11558.167 - 11617.745: 8.9466% ( 110) 00:12:32.224 11617.745 - 11677.324: 9.9798% ( 82) 00:12:32.224 11677.324 - 11736.902: 11.2525% ( 101) 00:12:32.224 11736.902 - 11796.480: 13.2686% ( 160) 00:12:32.224 11796.480 - 11856.058: 15.3856% ( 168) 00:12:32.224 11856.058 - 11915.636: 17.5025% ( 168) 00:12:32.224 11915.636 - 11975.215: 20.0101% ( 199) 00:12:32.224 11975.215 - 12034.793: 22.0136% ( 159) 00:12:32.224 12034.793 - 12094.371: 24.7102% ( 214) 00:12:32.224 12094.371 - 12153.949: 26.9657% ( 179) 00:12:32.224 12153.949 - 12213.527: 29.2843% ( 184) 00:12:32.224 12213.527 - 
12273.105: 31.2248% ( 154) 00:12:32.224 12273.105 - 12332.684: 33.2283% ( 159) 00:12:32.224 12332.684 - 12392.262: 35.0302% ( 143) 00:12:32.224 12392.262 - 12451.840: 36.2777% ( 99) 00:12:32.224 12451.840 - 12511.418: 37.3866% ( 88) 00:12:32.224 12511.418 - 12570.996: 38.7601% ( 109) 00:12:32.224 12570.996 - 12630.575: 39.9446% ( 94) 00:12:32.224 12630.575 - 12690.153: 40.8770% ( 74) 00:12:32.224 12690.153 - 12749.731: 41.6835% ( 64) 00:12:32.224 12749.731 - 12809.309: 42.4647% ( 62) 00:12:32.224 12809.309 - 12868.887: 43.3090% ( 67) 00:12:32.224 12868.887 - 12928.465: 44.3674% ( 84) 00:12:32.224 12928.465 - 12988.044: 45.3629% ( 79) 00:12:32.224 12988.044 - 13047.622: 46.4970% ( 90) 00:12:32.224 13047.622 - 13107.200: 47.5554% ( 84) 00:12:32.224 13107.200 - 13166.778: 48.5383% ( 78) 00:12:32.224 13166.778 - 13226.356: 49.5842% ( 83) 00:12:32.224 13226.356 - 13285.935: 50.6804% ( 87) 00:12:32.224 13285.935 - 13345.513: 51.9909% ( 104) 00:12:32.224 13345.513 - 13405.091: 53.3518% ( 108) 00:12:32.224 13405.091 - 13464.669: 54.7505% ( 111) 00:12:32.224 13464.669 - 13524.247: 55.9854% ( 98) 00:12:32.224 13524.247 - 13583.825: 57.2455% ( 100) 00:12:32.224 13583.825 - 13643.404: 58.6064% ( 108) 00:12:32.224 13643.404 - 13702.982: 59.6018% ( 79) 00:12:32.224 13702.982 - 13762.560: 60.3453% ( 59) 00:12:32.224 13762.560 - 13822.138: 61.1139% ( 61) 00:12:32.224 13822.138 - 13881.716: 61.9708% ( 68) 00:12:32.224 13881.716 - 13941.295: 62.7016% ( 58) 00:12:32.224 13941.295 - 14000.873: 63.1804% ( 38) 00:12:32.224 14000.873 - 14060.451: 63.6845% ( 40) 00:12:32.224 14060.451 - 14120.029: 64.1003% ( 33) 00:12:32.224 14120.029 - 14179.607: 64.3649% ( 21) 00:12:32.224 14179.607 - 14239.185: 64.5413% ( 14) 00:12:32.224 14239.185 - 14298.764: 64.8059% ( 21) 00:12:32.224 14298.764 - 14358.342: 65.1336% ( 26) 00:12:32.224 14358.342 - 14417.920: 65.4108% ( 22) 00:12:32.224 14417.920 - 14477.498: 65.6628% ( 20) 00:12:32.224 14477.498 - 14537.076: 65.9022% ( 19) 00:12:32.224 14537.076 - 14596.655: 66.1542% ( 20) 00:12:32.224 14596.655 - 14656.233: 66.3936% ( 19) 00:12:32.224 14656.233 - 14715.811: 66.6583% ( 21) 00:12:32.224 14715.811 - 14775.389: 66.7969% ( 11) 00:12:32.224 14775.389 - 14834.967: 66.8851% ( 7) 00:12:32.224 14834.967 - 14894.545: 66.9985% ( 9) 00:12:32.224 14894.545 - 14954.124: 67.0741% ( 6) 00:12:32.224 14954.124 - 15013.702: 67.1623% ( 7) 00:12:32.224 15013.702 - 15073.280: 67.2127% ( 4) 00:12:32.224 15073.280 - 15132.858: 67.2757% ( 5) 00:12:32.224 15132.858 - 15192.436: 67.3387% ( 5) 00:12:32.224 15192.436 - 15252.015: 67.4017% ( 5) 00:12:32.224 15252.015 - 15371.171: 67.4647% ( 5) 00:12:32.224 15371.171 - 15490.327: 67.4899% ( 2) 00:12:32.224 15490.327 - 15609.484: 67.5277% ( 3) 00:12:32.224 15609.484 - 15728.640: 67.5655% ( 3) 00:12:32.224 15728.640 - 15847.796: 67.6033% ( 3) 00:12:32.224 15847.796 - 15966.953: 67.6537% ( 4) 00:12:32.224 15966.953 - 16086.109: 67.7419% ( 7) 00:12:32.224 16086.109 - 16205.265: 67.8427% ( 8) 00:12:32.224 16205.265 - 16324.422: 67.9435% ( 8) 00:12:32.224 16324.422 - 16443.578: 68.0066% ( 5) 00:12:32.224 16443.578 - 16562.735: 68.1452% ( 11) 00:12:32.224 16562.735 - 16681.891: 68.2838% ( 11) 00:12:32.224 16681.891 - 16801.047: 68.4350% ( 12) 00:12:32.224 16801.047 - 16920.204: 68.5988% ( 13) 00:12:32.224 16920.204 - 17039.360: 68.8634% ( 21) 00:12:32.224 17039.360 - 17158.516: 68.9516% ( 7) 00:12:32.224 17158.516 - 17277.673: 69.0146% ( 5) 00:12:32.224 17277.673 - 17396.829: 69.0776% ( 5) 00:12:32.224 17396.829 - 17515.985: 69.1532% ( 6) 00:12:32.224 17515.985 
- 17635.142: 69.2792% ( 10) 00:12:32.224 17635.142 - 17754.298: 69.3674% ( 7) 00:12:32.224 17754.298 - 17873.455: 69.4304% ( 5) 00:12:32.224 17873.455 - 17992.611: 69.5060% ( 6) 00:12:32.224 17992.611 - 18111.767: 69.5817% ( 6) 00:12:32.224 18111.767 - 18230.924: 69.6447% ( 5) 00:12:32.224 18230.924 - 18350.080: 69.6825% ( 3) 00:12:32.224 18350.080 - 18469.236: 69.7329% ( 4) 00:12:32.224 18469.236 - 18588.393: 69.7833% ( 4) 00:12:32.224 18588.393 - 18707.549: 69.8337% ( 4) 00:12:32.224 18707.549 - 18826.705: 69.8841% ( 4) 00:12:32.224 18826.705 - 18945.862: 69.9723% ( 7) 00:12:32.224 18945.862 - 19065.018: 70.0857% ( 9) 00:12:32.224 19065.018 - 19184.175: 70.2117% ( 10) 00:12:32.224 19184.175 - 19303.331: 70.2747% ( 5) 00:12:32.224 19303.331 - 19422.487: 70.3251% ( 4) 00:12:32.224 19422.487 - 19541.644: 70.3755% ( 4) 00:12:32.224 19541.644 - 19660.800: 70.4133% ( 3) 00:12:32.224 19660.800 - 19779.956: 70.4511% ( 3) 00:12:32.224 19779.956 - 19899.113: 70.5015% ( 4) 00:12:32.224 19899.113 - 20018.269: 70.5267% ( 2) 00:12:32.224 20018.269 - 20137.425: 70.5645% ( 3) 00:12:32.224 20137.425 - 20256.582: 70.6023% ( 3) 00:12:32.224 20256.582 - 20375.738: 70.6653% ( 5) 00:12:32.224 20375.738 - 20494.895: 70.8165% ( 12) 00:12:32.224 20494.895 - 20614.051: 71.0938% ( 22) 00:12:32.224 20614.051 - 20733.207: 71.1946% ( 8) 00:12:32.224 20733.207 - 20852.364: 71.2828% ( 7) 00:12:32.224 20852.364 - 20971.520: 71.3710% ( 7) 00:12:32.224 20971.520 - 21090.676: 71.4088% ( 3) 00:12:32.224 21090.676 - 21209.833: 71.4592% ( 4) 00:12:32.224 21209.833 - 21328.989: 71.4970% ( 3) 00:12:32.224 21328.989 - 21448.145: 71.6104% ( 9) 00:12:32.224 21448.145 - 21567.302: 71.8246% ( 17) 00:12:32.224 21567.302 - 21686.458: 72.0514% ( 18) 00:12:32.224 21686.458 - 21805.615: 72.2026% ( 12) 00:12:32.224 21805.615 - 21924.771: 72.2908% ( 7) 00:12:32.224 21924.771 - 22043.927: 72.4168% ( 10) 00:12:32.224 22043.927 - 22163.084: 72.5806% ( 13) 00:12:32.224 22163.084 - 22282.240: 72.7319% ( 12) 00:12:32.224 22282.240 - 22401.396: 72.9587% ( 18) 00:12:32.224 22401.396 - 22520.553: 73.1981% ( 19) 00:12:32.224 22520.553 - 22639.709: 73.3997% ( 16) 00:12:32.224 22639.709 - 22758.865: 73.8911% ( 39) 00:12:32.224 22758.865 - 22878.022: 74.4582% ( 45) 00:12:32.224 22878.022 - 22997.178: 75.0756% ( 49) 00:12:32.224 22997.178 - 23116.335: 75.8821% ( 64) 00:12:32.224 23116.335 - 23235.491: 76.9783% ( 87) 00:12:32.224 23235.491 - 23354.647: 78.3392% ( 108) 00:12:32.224 23354.647 - 23473.804: 79.6371% ( 103) 00:12:32.224 23473.804 - 23592.960: 81.1618% ( 121) 00:12:32.224 23592.960 - 23712.116: 82.6739% ( 120) 00:12:32.224 23712.116 - 23831.273: 85.3075% ( 209) 00:12:32.224 23831.273 - 23950.429: 87.7394% ( 193) 00:12:32.224 23950.429 - 24069.585: 89.3775% ( 130) 00:12:32.224 24069.585 - 24188.742: 90.8392% ( 116) 00:12:32.224 24188.742 - 24307.898: 91.9103% ( 85) 00:12:32.224 24307.898 - 24427.055: 92.8679% ( 76) 00:12:32.224 24427.055 - 24546.211: 93.6618% ( 63) 00:12:32.224 24546.211 - 24665.367: 94.4934% ( 66) 00:12:32.224 24665.367 - 24784.524: 95.0605% ( 45) 00:12:32.224 24784.524 - 24903.680: 95.6023% ( 43) 00:12:32.224 24903.680 - 25022.836: 96.0811% ( 38) 00:12:32.224 25022.836 - 25141.993: 96.4340% ( 28) 00:12:32.224 25141.993 - 25261.149: 96.8750% ( 35) 00:12:32.224 25261.149 - 25380.305: 97.1522% ( 22) 00:12:32.225 25380.305 - 25499.462: 97.3790% ( 18) 00:12:32.225 25499.462 - 25618.618: 97.5680% ( 15) 00:12:32.225 25618.618 - 25737.775: 97.7571% ( 15) 00:12:32.225 25737.775 - 25856.931: 97.9083% ( 12) 00:12:32.225 25856.931 - 
25976.087: 98.0343% ( 10) 00:12:32.225 25976.087 - 26095.244: 98.0721% ( 3) 00:12:32.225 26095.244 - 26214.400: 98.1477% ( 6) 00:12:32.225 26214.400 - 26333.556: 98.2233% ( 6) 00:12:32.225 26333.556 - 26452.713: 98.2611% ( 3) 00:12:32.225 26452.713 - 26571.869: 98.3115% ( 4) 00:12:32.225 26571.869 - 26691.025: 98.3493% ( 3) 00:12:32.225 26691.025 - 26810.182: 98.3745% ( 2) 00:12:32.225 26810.182 - 26929.338: 98.3871% ( 1) 00:12:32.225 30027.404 - 30146.560: 98.4123% ( 2) 00:12:32.225 30146.560 - 30265.716: 98.4501% ( 3) 00:12:32.225 30265.716 - 30384.873: 98.4879% ( 3) 00:12:32.225 30384.873 - 30504.029: 98.5383% ( 4) 00:12:32.225 30504.029 - 30742.342: 98.6265% ( 7) 00:12:32.225 30742.342 - 30980.655: 98.7273% ( 8) 00:12:32.225 30980.655 - 31218.967: 98.8281% ( 8) 00:12:32.225 31218.967 - 31457.280: 98.9037% ( 6) 00:12:32.225 31457.280 - 31695.593: 99.0045% ( 8) 00:12:32.225 31695.593 - 31933.905: 99.0927% ( 7) 00:12:32.225 31933.905 - 32172.218: 99.1935% ( 8) 00:12:32.225 37891.724 - 38130.036: 99.2061% ( 1) 00:12:32.225 38130.036 - 38368.349: 99.2944% ( 7) 00:12:32.225 38368.349 - 38606.662: 99.3952% ( 8) 00:12:32.225 38606.662 - 38844.975: 99.4708% ( 6) 00:12:32.225 38844.975 - 39083.287: 99.5590% ( 7) 00:12:32.225 39083.287 - 39321.600: 99.6472% ( 7) 00:12:32.225 39321.600 - 39559.913: 99.7354% ( 7) 00:12:32.225 39559.913 - 39798.225: 99.8362% ( 8) 00:12:32.225 39798.225 - 40036.538: 99.9118% ( 6) 00:12:32.225 40036.538 - 40274.851: 100.0000% ( 7) 00:12:32.225 00:12:32.225 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:32.225 ============================================================================== 00:12:32.225 Range in us Cumulative IO count 00:12:32.225 10843.229 - 10902.807: 0.0625% ( 5) 00:12:32.225 10902.807 - 10962.385: 0.1000% ( 3) 00:12:32.225 10962.385 - 11021.964: 0.2125% ( 9) 00:12:32.225 11021.964 - 11081.542: 0.4125% ( 16) 00:12:32.225 11081.542 - 11141.120: 0.6500% ( 19) 00:12:32.225 11141.120 - 11200.698: 1.1125% ( 37) 00:12:32.225 11200.698 - 11260.276: 1.5500% ( 35) 00:12:32.225 11260.276 - 11319.855: 2.4250% ( 70) 00:12:32.225 11319.855 - 11379.433: 3.1375% ( 57) 00:12:32.225 11379.433 - 11439.011: 4.2625% ( 90) 00:12:32.225 11439.011 - 11498.589: 5.5500% ( 103) 00:12:32.225 11498.589 - 11558.167: 7.0375% ( 119) 00:12:32.225 11558.167 - 11617.745: 8.7250% ( 135) 00:12:32.225 11617.745 - 11677.324: 9.8625% ( 91) 00:12:32.225 11677.324 - 11736.902: 11.2125% ( 108) 00:12:32.225 11736.902 - 11796.480: 13.0750% ( 149) 00:12:32.225 11796.480 - 11856.058: 15.1000% ( 162) 00:12:32.225 11856.058 - 11915.636: 17.1500% ( 164) 00:12:32.225 11915.636 - 11975.215: 19.5125% ( 189) 00:12:32.225 11975.215 - 12034.793: 22.3750% ( 229) 00:12:32.225 12034.793 - 12094.371: 25.2250% ( 228) 00:12:32.225 12094.371 - 12153.949: 28.0125% ( 223) 00:12:32.225 12153.949 - 12213.527: 30.2125% ( 176) 00:12:32.225 12213.527 - 12273.105: 32.3875% ( 174) 00:12:32.225 12273.105 - 12332.684: 34.3750% ( 159) 00:12:32.225 12332.684 - 12392.262: 35.7625% ( 111) 00:12:32.225 12392.262 - 12451.840: 37.1625% ( 112) 00:12:32.225 12451.840 - 12511.418: 38.4375% ( 102) 00:12:32.225 12511.418 - 12570.996: 39.5500% ( 89) 00:12:32.225 12570.996 - 12630.575: 40.7625% ( 97) 00:12:32.225 12630.575 - 12690.153: 41.6875% ( 74) 00:12:32.225 12690.153 - 12749.731: 42.4375% ( 60) 00:12:32.225 12749.731 - 12809.309: 43.2250% ( 63) 00:12:32.225 12809.309 - 12868.887: 44.0375% ( 65) 00:12:32.225 12868.887 - 12928.465: 44.8875% ( 68) 00:12:32.225 12928.465 - 12988.044: 45.7500% ( 69) 00:12:32.225 
12988.044 - 13047.622: 46.6750% ( 74) 00:12:32.225 13047.622 - 13107.200: 47.6625% ( 79) 00:12:32.225 13107.200 - 13166.778: 48.7750% ( 89) 00:12:32.225 13166.778 - 13226.356: 49.9125% ( 91) 00:12:32.225 13226.356 - 13285.935: 51.0250% ( 89) 00:12:32.225 13285.935 - 13345.513: 52.1625% ( 91) 00:12:32.225 13345.513 - 13405.091: 53.3125% ( 92) 00:12:32.225 13405.091 - 13464.669: 54.6375% ( 106) 00:12:32.225 13464.669 - 13524.247: 55.7250% ( 87) 00:12:32.225 13524.247 - 13583.825: 56.8500% ( 90) 00:12:32.225 13583.825 - 13643.404: 58.1000% ( 100) 00:12:32.225 13643.404 - 13702.982: 59.1000% ( 80) 00:12:32.225 13702.982 - 13762.560: 59.9625% ( 69) 00:12:32.225 13762.560 - 13822.138: 60.8500% ( 71) 00:12:32.225 13822.138 - 13881.716: 61.7250% ( 70) 00:12:32.225 13881.716 - 13941.295: 62.3375% ( 49) 00:12:32.225 13941.295 - 14000.873: 62.9500% ( 49) 00:12:32.225 14000.873 - 14060.451: 63.5250% ( 46) 00:12:32.225 14060.451 - 14120.029: 64.1000% ( 46) 00:12:32.225 14120.029 - 14179.607: 64.5250% ( 34) 00:12:32.225 14179.607 - 14239.185: 64.9000% ( 30) 00:12:32.225 14239.185 - 14298.764: 65.2125% ( 25) 00:12:32.225 14298.764 - 14358.342: 65.4875% ( 22) 00:12:32.225 14358.342 - 14417.920: 65.7500% ( 21) 00:12:32.225 14417.920 - 14477.498: 65.9000% ( 12) 00:12:32.225 14477.498 - 14537.076: 66.0625% ( 13) 00:12:32.225 14537.076 - 14596.655: 66.2250% ( 13) 00:12:32.225 14596.655 - 14656.233: 66.3375% ( 9) 00:12:32.225 14656.233 - 14715.811: 66.4375% ( 8) 00:12:32.225 14715.811 - 14775.389: 66.5625% ( 10) 00:12:32.225 14775.389 - 14834.967: 66.6750% ( 9) 00:12:32.225 14834.967 - 14894.545: 66.8375% ( 13) 00:12:32.225 14894.545 - 14954.124: 66.9500% ( 9) 00:12:32.225 14954.124 - 15013.702: 67.0750% ( 10) 00:12:32.225 15013.702 - 15073.280: 67.1750% ( 8) 00:12:32.225 15073.280 - 15132.858: 67.2750% ( 8) 00:12:32.225 15132.858 - 15192.436: 67.3125% ( 3) 00:12:32.225 15192.436 - 15252.015: 67.4125% ( 8) 00:12:32.225 15252.015 - 15371.171: 67.6375% ( 18) 00:12:32.225 15371.171 - 15490.327: 67.8000% ( 13) 00:12:32.225 15490.327 - 15609.484: 67.9625% ( 13) 00:12:32.225 15609.484 - 15728.640: 68.0750% ( 9) 00:12:32.225 15728.640 - 15847.796: 68.1375% ( 5) 00:12:32.225 15847.796 - 15966.953: 68.2000% ( 5) 00:12:32.225 15966.953 - 16086.109: 68.3250% ( 10) 00:12:32.225 16086.109 - 16205.265: 68.5000% ( 14) 00:12:32.225 16205.265 - 16324.422: 68.5500% ( 4) 00:12:32.225 16324.422 - 16443.578: 68.6375% ( 7) 00:12:32.225 16443.578 - 16562.735: 68.7250% ( 7) 00:12:32.225 16562.735 - 16681.891: 68.8250% ( 8) 00:12:32.225 16681.891 - 16801.047: 68.9125% ( 7) 00:12:32.225 16801.047 - 16920.204: 68.9875% ( 6) 00:12:32.225 16920.204 - 17039.360: 69.0875% ( 8) 00:12:32.225 17039.360 - 17158.516: 69.1625% ( 6) 00:12:32.225 17158.516 - 17277.673: 69.3250% ( 13) 00:12:32.225 17277.673 - 17396.829: 69.4875% ( 13) 00:12:32.225 17396.829 - 17515.985: 69.5500% ( 5) 00:12:32.225 17515.985 - 17635.142: 69.5875% ( 3) 00:12:32.225 17635.142 - 17754.298: 69.6000% ( 1) 00:12:32.225 18469.236 - 18588.393: 69.6500% ( 4) 00:12:32.225 18588.393 - 18707.549: 69.7000% ( 4) 00:12:32.225 18707.549 - 18826.705: 69.7500% ( 4) 00:12:32.225 18826.705 - 18945.862: 69.8000% ( 4) 00:12:32.225 18945.862 - 19065.018: 69.8500% ( 4) 00:12:32.225 19065.018 - 19184.175: 69.8875% ( 3) 00:12:32.225 19184.175 - 19303.331: 69.9375% ( 4) 00:12:32.225 19303.331 - 19422.487: 69.9875% ( 4) 00:12:32.225 19422.487 - 19541.644: 70.0250% ( 3) 00:12:32.225 19541.644 - 19660.800: 70.0750% ( 4) 00:12:32.225 19660.800 - 19779.956: 70.2500% ( 14) 00:12:32.225 19779.956 - 
19899.113: 70.4000% ( 12) 00:12:32.226 19899.113 - 20018.269: 70.5000% ( 8) 00:12:32.226 20018.269 - 20137.425: 70.6500% ( 12) 00:12:32.226 20137.425 - 20256.582: 70.9125% ( 21) 00:12:32.226 20256.582 - 20375.738: 71.0000% ( 7) 00:12:32.226 20375.738 - 20494.895: 71.1000% ( 8) 00:12:32.226 20494.895 - 20614.051: 71.2375% ( 11) 00:12:32.226 20614.051 - 20733.207: 71.5750% ( 27) 00:12:32.226 20733.207 - 20852.364: 71.9000% ( 26) 00:12:32.226 20852.364 - 20971.520: 72.1750% ( 22) 00:12:32.226 20971.520 - 21090.676: 72.3875% ( 17) 00:12:32.226 21090.676 - 21209.833: 72.5125% ( 10) 00:12:32.226 21209.833 - 21328.989: 72.5875% ( 6) 00:12:32.226 21328.989 - 21448.145: 72.6500% ( 5) 00:12:32.226 21448.145 - 21567.302: 72.6750% ( 2) 00:12:32.226 21567.302 - 21686.458: 72.7000% ( 2) 00:12:32.226 21686.458 - 21805.615: 72.7375% ( 3) 00:12:32.226 21805.615 - 21924.771: 72.8000% ( 5) 00:12:32.226 21924.771 - 22043.927: 72.8875% ( 7) 00:12:32.226 22043.927 - 22163.084: 73.0375% ( 12) 00:12:32.226 22163.084 - 22282.240: 73.1875% ( 12) 00:12:32.226 22282.240 - 22401.396: 73.3500% ( 13) 00:12:32.226 22401.396 - 22520.553: 73.5250% ( 14) 00:12:32.226 22520.553 - 22639.709: 73.7750% ( 20) 00:12:32.226 22639.709 - 22758.865: 74.1625% ( 31) 00:12:32.226 22758.865 - 22878.022: 74.7500% ( 47) 00:12:32.226 22878.022 - 22997.178: 75.6750% ( 74) 00:12:32.226 22997.178 - 23116.335: 76.5250% ( 68) 00:12:32.226 23116.335 - 23235.491: 77.4500% ( 74) 00:12:32.226 23235.491 - 23354.647: 78.5500% ( 88) 00:12:32.226 23354.647 - 23473.804: 79.9375% ( 111) 00:12:32.226 23473.804 - 23592.960: 82.0375% ( 168) 00:12:32.226 23592.960 - 23712.116: 84.3125% ( 182) 00:12:32.226 23712.116 - 23831.273: 86.4375% ( 170) 00:12:32.226 23831.273 - 23950.429: 88.5625% ( 170) 00:12:32.226 23950.429 - 24069.585: 90.0125% ( 116) 00:12:32.226 24069.585 - 24188.742: 91.2000% ( 95) 00:12:32.226 24188.742 - 24307.898: 92.1375% ( 75) 00:12:32.226 24307.898 - 24427.055: 93.1875% ( 84) 00:12:32.226 24427.055 - 24546.211: 93.9375% ( 60) 00:12:32.226 24546.211 - 24665.367: 95.1125% ( 94) 00:12:32.226 24665.367 - 24784.524: 95.9000% ( 63) 00:12:32.226 24784.524 - 24903.680: 96.4500% ( 44) 00:12:32.226 24903.680 - 25022.836: 96.8875% ( 35) 00:12:32.226 25022.836 - 25141.993: 97.2875% ( 32) 00:12:32.226 25141.993 - 25261.149: 97.6125% ( 26) 00:12:32.226 25261.149 - 25380.305: 97.8625% ( 20) 00:12:32.226 25380.305 - 25499.462: 98.1250% ( 21) 00:12:32.226 25499.462 - 25618.618: 98.4125% ( 23) 00:12:32.226 25618.618 - 25737.775: 98.6125% ( 16) 00:12:32.226 25737.775 - 25856.931: 98.7375% ( 10) 00:12:32.226 25856.931 - 25976.087: 98.8750% ( 11) 00:12:32.226 25976.087 - 26095.244: 98.9625% ( 7) 00:12:32.226 26095.244 - 26214.400: 99.0500% ( 7) 00:12:32.226 26214.400 - 26333.556: 99.1000% ( 4) 00:12:32.226 26333.556 - 26452.713: 99.1500% ( 4) 00:12:32.226 26452.713 - 26571.869: 99.1875% ( 3) 00:12:32.226 26571.869 - 26691.025: 99.2000% ( 1) 00:12:32.226 29669.935 - 29789.091: 99.2375% ( 3) 00:12:32.226 29789.091 - 29908.247: 99.2750% ( 3) 00:12:32.226 29908.247 - 30027.404: 99.3250% ( 4) 00:12:32.226 30027.404 - 30146.560: 99.3750% ( 4) 00:12:32.226 30146.560 - 30265.716: 99.4125% ( 3) 00:12:32.226 30265.716 - 30384.873: 99.4500% ( 3) 00:12:32.226 30384.873 - 30504.029: 99.5000% ( 4) 00:12:32.226 30504.029 - 30742.342: 99.5875% ( 7) 00:12:32.226 30742.342 - 30980.655: 99.6625% ( 6) 00:12:32.226 30980.655 - 31218.967: 99.7625% ( 8) 00:12:32.226 31218.967 - 31457.280: 99.8500% ( 7) 00:12:32.226 31457.280 - 31695.593: 99.9375% ( 7) 00:12:32.226 31695.593 - 
31933.905: 100.0000% ( 5) 00:12:32.226 00:12:32.226 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:32.226 ============================================================================== 00:12:32.226 Range in us Cumulative IO count 00:12:32.226 10843.229 - 10902.807: 0.0250% ( 2) 00:12:32.226 10902.807 - 10962.385: 0.1000% ( 6) 00:12:32.226 10962.385 - 11021.964: 0.2000% ( 8) 00:12:32.226 11021.964 - 11081.542: 0.3375% ( 11) 00:12:32.226 11081.542 - 11141.120: 0.5125% ( 14) 00:12:32.226 11141.120 - 11200.698: 0.8000% ( 23) 00:12:32.226 11200.698 - 11260.276: 1.4000% ( 48) 00:12:32.226 11260.276 - 11319.855: 2.2625% ( 69) 00:12:32.226 11319.855 - 11379.433: 2.9500% ( 55) 00:12:32.226 11379.433 - 11439.011: 3.6000% ( 52) 00:12:32.226 11439.011 - 11498.589: 4.8500% ( 100) 00:12:32.226 11498.589 - 11558.167: 6.5250% ( 134) 00:12:32.226 11558.167 - 11617.745: 8.0000% ( 118) 00:12:32.226 11617.745 - 11677.324: 10.1375% ( 171) 00:12:32.226 11677.324 - 11736.902: 11.7000% ( 125) 00:12:32.226 11736.902 - 11796.480: 13.4000% ( 136) 00:12:32.226 11796.480 - 11856.058: 15.6250% ( 178) 00:12:32.226 11856.058 - 11915.636: 18.4000% ( 222) 00:12:32.226 11915.636 - 11975.215: 20.2875% ( 151) 00:12:32.226 11975.215 - 12034.793: 22.7000% ( 193) 00:12:32.226 12034.793 - 12094.371: 25.0625% ( 189) 00:12:32.226 12094.371 - 12153.949: 26.9000% ( 147) 00:12:32.226 12153.949 - 12213.527: 28.7625% ( 149) 00:12:32.226 12213.527 - 12273.105: 30.8000% ( 163) 00:12:32.226 12273.105 - 12332.684: 32.6375% ( 147) 00:12:32.226 12332.684 - 12392.262: 34.3750% ( 139) 00:12:32.226 12392.262 - 12451.840: 36.1625% ( 143) 00:12:32.226 12451.840 - 12511.418: 37.7500% ( 127) 00:12:32.226 12511.418 - 12570.996: 38.9875% ( 99) 00:12:32.226 12570.996 - 12630.575: 40.2125% ( 98) 00:12:32.226 12630.575 - 12690.153: 41.9500% ( 139) 00:12:32.226 12690.153 - 12749.731: 43.2375% ( 103) 00:12:32.226 12749.731 - 12809.309: 44.3250% ( 87) 00:12:32.226 12809.309 - 12868.887: 45.0375% ( 57) 00:12:32.226 12868.887 - 12928.465: 45.8000% ( 61) 00:12:32.226 12928.465 - 12988.044: 46.9250% ( 90) 00:12:32.226 12988.044 - 13047.622: 47.7750% ( 68) 00:12:32.226 13047.622 - 13107.200: 48.8375% ( 85) 00:12:32.226 13107.200 - 13166.778: 49.8875% ( 84) 00:12:32.226 13166.778 - 13226.356: 51.0375% ( 92) 00:12:32.226 13226.356 - 13285.935: 52.0625% ( 82) 00:12:32.226 13285.935 - 13345.513: 53.1375% ( 86) 00:12:32.226 13345.513 - 13405.091: 54.2250% ( 87) 00:12:32.226 13405.091 - 13464.669: 55.5125% ( 103) 00:12:32.226 13464.669 - 13524.247: 56.7000% ( 95) 00:12:32.226 13524.247 - 13583.825: 57.7125% ( 81) 00:12:32.226 13583.825 - 13643.404: 58.5875% ( 70) 00:12:32.226 13643.404 - 13702.982: 59.4000% ( 65) 00:12:32.226 13702.982 - 13762.560: 60.2375% ( 67) 00:12:32.226 13762.560 - 13822.138: 60.9500% ( 57) 00:12:32.226 13822.138 - 13881.716: 61.5500% ( 48) 00:12:32.226 13881.716 - 13941.295: 62.0250% ( 38) 00:12:32.226 13941.295 - 14000.873: 62.5000% ( 38) 00:12:32.226 14000.873 - 14060.451: 62.9000% ( 32) 00:12:32.226 14060.451 - 14120.029: 63.3125% ( 33) 00:12:32.226 14120.029 - 14179.607: 63.6750% ( 29) 00:12:32.226 14179.607 - 14239.185: 64.0625% ( 31) 00:12:32.226 14239.185 - 14298.764: 64.3750% ( 25) 00:12:32.226 14298.764 - 14358.342: 64.6375% ( 21) 00:12:32.226 14358.342 - 14417.920: 64.8125% ( 14) 00:12:32.226 14417.920 - 14477.498: 65.0500% ( 19) 00:12:32.226 14477.498 - 14537.076: 65.2250% ( 14) 00:12:32.226 14537.076 - 14596.655: 65.4125% ( 15) 00:12:32.226 14596.655 - 14656.233: 65.5250% ( 9) 00:12:32.226 14656.233 - 14715.811: 
65.6875% ( 13) 00:12:32.226 14715.811 - 14775.389: 65.8750% ( 15) 00:12:32.226 14775.389 - 14834.967: 66.0500% ( 14) 00:12:32.226 14834.967 - 14894.545: 66.1875% ( 11) 00:12:32.226 14894.545 - 14954.124: 66.3250% ( 11) 00:12:32.226 14954.124 - 15013.702: 66.4125% ( 7) 00:12:32.226 15013.702 - 15073.280: 66.6000% ( 15) 00:12:32.226 15073.280 - 15132.858: 66.7875% ( 15) 00:12:32.226 15132.858 - 15192.436: 66.9875% ( 16) 00:12:32.226 15192.436 - 15252.015: 67.0750% ( 7) 00:12:32.226 15252.015 - 15371.171: 67.2000% ( 10) 00:12:32.226 15371.171 - 15490.327: 67.3500% ( 12) 00:12:32.226 15490.327 - 15609.484: 67.5000% ( 12) 00:12:32.226 15609.484 - 15728.640: 67.6000% ( 8) 00:12:32.226 15728.640 - 15847.796: 67.7000% ( 8) 00:12:32.226 15847.796 - 15966.953: 67.8375% ( 11) 00:12:32.226 15966.953 - 16086.109: 67.9375% ( 8) 00:12:32.226 16086.109 - 16205.265: 68.0000% ( 5) 00:12:32.226 16205.265 - 16324.422: 68.0625% ( 5) 00:12:32.226 16324.422 - 16443.578: 68.2000% ( 11) 00:12:32.226 16443.578 - 16562.735: 68.4000% ( 16) 00:12:32.226 16562.735 - 16681.891: 68.6125% ( 17) 00:12:32.226 16681.891 - 16801.047: 68.7625% ( 12) 00:12:32.226 16801.047 - 16920.204: 68.8500% ( 7) 00:12:32.226 16920.204 - 17039.360: 68.9250% ( 6) 00:12:32.226 17039.360 - 17158.516: 68.9750% ( 4) 00:12:32.226 17158.516 - 17277.673: 69.0375% ( 5) 00:12:32.226 17277.673 - 17396.829: 69.0875% ( 4) 00:12:32.226 17396.829 - 17515.985: 69.1500% ( 5) 00:12:32.226 17515.985 - 17635.142: 69.2375% ( 7) 00:12:32.226 17635.142 - 17754.298: 69.4125% ( 14) 00:12:32.226 17754.298 - 17873.455: 69.5500% ( 11) 00:12:32.226 17873.455 - 17992.611: 69.5875% ( 3) 00:12:32.226 17992.611 - 18111.767: 69.6000% ( 1) 00:12:32.226 18588.393 - 18707.549: 69.6250% ( 2) 00:12:32.226 18707.549 - 18826.705: 69.6625% ( 3) 00:12:32.226 18826.705 - 18945.862: 69.7500% ( 7) 00:12:32.226 18945.862 - 19065.018: 69.9250% ( 14) 00:12:32.226 19065.018 - 19184.175: 70.0250% ( 8) 00:12:32.226 19184.175 - 19303.331: 70.1500% ( 10) 00:12:32.226 19303.331 - 19422.487: 70.2875% ( 11) 00:12:32.226 19422.487 - 19541.644: 70.4750% ( 15) 00:12:32.226 19541.644 - 19660.800: 70.6000% ( 10) 00:12:32.226 19660.800 - 19779.956: 70.7375% ( 11) 00:12:32.226 19779.956 - 19899.113: 70.8875% ( 12) 00:12:32.226 19899.113 - 20018.269: 71.1875% ( 24) 00:12:32.226 20018.269 - 20137.425: 71.5750% ( 31) 00:12:32.227 20137.425 - 20256.582: 71.8375% ( 21) 00:12:32.227 20256.582 - 20375.738: 71.9875% ( 12) 00:12:32.227 20375.738 - 20494.895: 72.1375% ( 12) 00:12:32.227 20494.895 - 20614.051: 72.2875% ( 12) 00:12:32.227 20614.051 - 20733.207: 72.4625% ( 14) 00:12:32.227 20733.207 - 20852.364: 72.6000% ( 11) 00:12:32.227 20852.364 - 20971.520: 72.7250% ( 10) 00:12:32.227 20971.520 - 21090.676: 72.8250% ( 8) 00:12:32.227 21090.676 - 21209.833: 72.9750% ( 12) 00:12:32.227 21209.833 - 21328.989: 73.1000% ( 10) 00:12:32.227 21328.989 - 21448.145: 73.1625% ( 5) 00:12:32.227 21448.145 - 21567.302: 73.2375% ( 6) 00:12:32.227 21567.302 - 21686.458: 73.3000% ( 5) 00:12:32.227 21686.458 - 21805.615: 73.4250% ( 10) 00:12:32.227 21805.615 - 21924.771: 73.5000% ( 6) 00:12:32.227 21924.771 - 22043.927: 73.6000% ( 8) 00:12:32.227 22043.927 - 22163.084: 73.6875% ( 7) 00:12:32.227 22163.084 - 22282.240: 73.7250% ( 3) 00:12:32.227 22282.240 - 22401.396: 73.7625% ( 3) 00:12:32.227 22401.396 - 22520.553: 73.8375% ( 6) 00:12:32.227 22520.553 - 22639.709: 73.9625% ( 10) 00:12:32.227 22639.709 - 22758.865: 74.4875% ( 42) 00:12:32.227 22758.865 - 22878.022: 75.1250% ( 51) 00:12:32.227 22878.022 - 22997.178: 75.8375% ( 57) 
00:12:32.227 22997.178 - 23116.335: 76.9750% ( 91) 00:12:32.227 23116.335 - 23235.491: 78.2125% ( 99) 00:12:32.227 23235.491 - 23354.647: 79.2375% ( 82) 00:12:32.227 23354.647 - 23473.804: 80.3750% ( 91) 00:12:32.227 23473.804 - 23592.960: 82.4125% ( 163) 00:12:32.227 23592.960 - 23712.116: 83.8000% ( 111) 00:12:32.227 23712.116 - 23831.273: 85.6125% ( 145) 00:12:32.227 23831.273 - 23950.429: 87.5125% ( 152) 00:12:32.227 23950.429 - 24069.585: 89.1125% ( 128) 00:12:32.227 24069.585 - 24188.742: 90.4000% ( 103) 00:12:32.227 24188.742 - 24307.898: 91.4500% ( 84) 00:12:32.227 24307.898 - 24427.055: 92.5250% ( 86) 00:12:32.227 24427.055 - 24546.211: 94.2750% ( 140) 00:12:32.227 24546.211 - 24665.367: 95.2375% ( 77) 00:12:32.227 24665.367 - 24784.524: 95.9750% ( 59) 00:12:32.227 24784.524 - 24903.680: 96.5250% ( 44) 00:12:32.227 24903.680 - 25022.836: 96.9625% ( 35) 00:12:32.227 25022.836 - 25141.993: 97.3625% ( 32) 00:12:32.227 25141.993 - 25261.149: 97.6875% ( 26) 00:12:32.227 25261.149 - 25380.305: 97.9750% ( 23) 00:12:32.227 25380.305 - 25499.462: 98.2750% ( 24) 00:12:32.227 25499.462 - 25618.618: 98.5250% ( 20) 00:12:32.227 25618.618 - 25737.775: 98.7250% ( 16) 00:12:32.227 25737.775 - 25856.931: 98.8750% ( 12) 00:12:32.227 25856.931 - 25976.087: 98.9750% ( 8) 00:12:32.227 25976.087 - 26095.244: 99.1000% ( 10) 00:12:32.227 26095.244 - 26214.400: 99.1750% ( 6) 00:12:32.227 26214.400 - 26333.556: 99.1875% ( 1) 00:12:32.227 26333.556 - 26452.713: 99.2000% ( 1) 00:12:32.227 27048.495 - 27167.651: 99.2250% ( 2) 00:12:32.227 27167.651 - 27286.807: 99.2625% ( 3) 00:12:32.227 27286.807 - 27405.964: 99.3125% ( 4) 00:12:32.227 27405.964 - 27525.120: 99.3500% ( 3) 00:12:32.227 27525.120 - 27644.276: 99.4000% ( 4) 00:12:32.227 27644.276 - 27763.433: 99.4375% ( 3) 00:12:32.227 27763.433 - 27882.589: 99.4875% ( 4) 00:12:32.227 27882.589 - 28001.745: 99.5250% ( 3) 00:12:32.227 28001.745 - 28120.902: 99.5750% ( 4) 00:12:32.227 28120.902 - 28240.058: 99.6125% ( 3) 00:12:32.227 28240.058 - 28359.215: 99.6625% ( 4) 00:12:32.227 28359.215 - 28478.371: 99.7000% ( 3) 00:12:32.227 28478.371 - 28597.527: 99.7500% ( 4) 00:12:32.227 28597.527 - 28716.684: 99.8000% ( 4) 00:12:32.227 28716.684 - 28835.840: 99.8375% ( 3) 00:12:32.227 28835.840 - 28954.996: 99.8875% ( 4) 00:12:32.227 28954.996 - 29074.153: 99.9375% ( 4) 00:12:32.227 29074.153 - 29193.309: 99.9875% ( 4) 00:12:32.227 29193.309 - 29312.465: 100.0000% ( 1) 00:12:32.227 00:12:32.227 17:03:24 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:32.227 00:12:32.227 real 0m2.682s 00:12:32.227 user 0m2.279s 00:12:32.227 sys 0m0.290s 00:12:32.227 17:03:24 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.227 17:03:24 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:32.227 ************************************ 00:12:32.227 END TEST nvme_perf 00:12:32.227 ************************************ 00:12:32.227 17:03:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:32.227 17:03:24 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:32.227 17:03:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.227 17:03:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.227 ************************************ 00:12:32.227 START TEST nvme_hello_world 00:12:32.227 ************************************ 00:12:32.227 17:03:24 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:32.485 Initializing NVMe Controllers 00:12:32.485 Attached to 0000:00:10.0 00:12:32.485 Namespace ID: 1 size: 6GB 00:12:32.485 Attached to 0000:00:11.0 00:12:32.485 Namespace ID: 1 size: 5GB 00:12:32.485 Attached to 0000:00:13.0 00:12:32.485 Namespace ID: 1 size: 1GB 00:12:32.485 Attached to 0000:00:12.0 00:12:32.485 Namespace ID: 1 size: 4GB 00:12:32.485 Namespace ID: 2 size: 4GB 00:12:32.485 Namespace ID: 3 size: 4GB 00:12:32.485 Initialization complete. 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 INFO: using host memory buffer for IO 00:12:32.485 Hello world! 00:12:32.485 00:12:32.485 real 0m0.317s 00:12:32.485 user 0m0.109s 00:12:32.485 sys 0m0.159s 00:12:32.485 17:03:24 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.485 17:03:24 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:32.485 ************************************ 00:12:32.486 END TEST nvme_hello_world 00:12:32.486 ************************************ 00:12:32.486 17:03:24 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:32.486 17:03:24 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:32.486 17:03:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.486 17:03:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.757 ************************************ 00:12:32.757 START TEST nvme_sgl 00:12:32.757 ************************************ 00:12:32.757 17:03:24 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:32.757 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:32.757 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:32.757 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:33.039 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:33.039 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:33.039 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:33.039 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_8 Invalid IO 
length parameter 00:12:33.039 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:33.039 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:33.039 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:33.040 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:12:33.040 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:33.040 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:33.040 NVMe Readv/Writev Request test 00:12:33.040 Attached to 0000:00:10.0 00:12:33.040 Attached to 0000:00:11.0 00:12:33.040 Attached to 0000:00:13.0 00:12:33.040 Attached to 0000:00:12.0 00:12:33.040 0000:00:10.0: build_io_request_2 test passed 00:12:33.040 0000:00:10.0: build_io_request_4 test passed 00:12:33.040 0000:00:10.0: build_io_request_5 test passed 00:12:33.040 0000:00:10.0: build_io_request_6 test passed 00:12:33.040 0000:00:10.0: build_io_request_7 test passed 00:12:33.040 0000:00:10.0: build_io_request_10 test passed 00:12:33.040 0000:00:11.0: build_io_request_2 test passed 00:12:33.040 0000:00:11.0: build_io_request_4 test passed 00:12:33.040 0000:00:11.0: build_io_request_5 test passed 00:12:33.040 0000:00:11.0: build_io_request_6 test passed 00:12:33.040 0000:00:11.0: build_io_request_7 test passed 00:12:33.040 0000:00:11.0: build_io_request_10 test passed 00:12:33.040 Cleaning up... 00:12:33.040 00:12:33.040 real 0m0.381s 00:12:33.040 user 0m0.195s 00:12:33.040 sys 0m0.141s 00:12:33.040 17:03:25 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.040 17:03:25 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 ************************************ 00:12:33.040 END TEST nvme_sgl 00:12:33.040 ************************************ 00:12:33.040 17:03:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:33.040 17:03:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:33.040 17:03:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.040 17:03:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.040 ************************************ 00:12:33.040 START TEST nvme_e2edp 00:12:33.040 ************************************ 00:12:33.040 17:03:25 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:33.298 NVMe Write/Read with End-to-End data protection test 00:12:33.298 Attached to 0000:00:10.0 00:12:33.299 Attached to 0000:00:11.0 00:12:33.299 Attached to 0000:00:13.0 00:12:33.299 Attached to 0000:00:12.0 00:12:33.299 Cleaning up... 
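The nvme_dp output above attaches to all four controllers and cleans up without reporting per-namespace results, which is typically the case when the namespaces it finds are not formatted with protection information. For context, a minimal sketch of the kind of end-to-end data protection write this test exercises is shown below; it is not the source of test/nvme/e2edp/nvme_dp, and it assumes "ns" and "qpair" handles obtained through the usual spdk_nvme_probe() attach path on a namespace formatted with protection information.

/*
 * Illustrative sketch only, not the source of test/nvme/e2edp/nvme_dp.
 * Assumes "ns" and "qpair" were obtained through the usual spdk_nvme_probe()
 * attach callback and spdk_nvme_ctrlr_alloc_io_qpair(), and that the
 * namespace is formatted with protection information.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "protected write failed\n");
	}
	*(bool *)arg = true;
}

static int
write_one_protected_block(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
{
	uint32_t sector_size = spdk_nvme_ns_get_sector_size(ns);
	bool done = false;
	int rc;

	/* Data buffer only: with PRACT set, the controller generates and inserts
	 * the protection information itself, so for the common 8-byte PI format
	 * no separate metadata buffer is transferred. */
	void *buf = spdk_zmalloc(sector_size, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (buf == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, NULL,
					    0 /* starting LBA */, 1 /* LBA count */,
					    write_done, &done,
					    SPDK_NVME_IO_FLAGS_PRACT,
					    0 /* apptag mask */, 0 /* apptag */);
	if (rc == 0) {
		/* Poll the I/O queue pair until the completion callback fires. */
		while (!done) {
			spdk_nvme_qpair_process_completions(qpair, 0);
		}
	}

	spdk_free(buf);
	return rc;
}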
00:12:33.299 ************************************ 00:12:33.299 END TEST nvme_e2edp 00:12:33.299 ************************************ 00:12:33.299 00:12:33.299 real 0m0.286s 00:12:33.299 user 0m0.112s 00:12:33.299 sys 0m0.131s 00:12:33.299 17:03:25 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.299 17:03:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:33.299 17:03:25 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:33.299 17:03:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:33.299 17:03:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.299 17:03:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.299 ************************************ 00:12:33.299 START TEST nvme_reserve 00:12:33.299 ************************************ 00:12:33.299 17:03:25 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:33.557 ===================================================== 00:12:33.557 NVMe Controller at PCI bus 0, device 16, function 0 00:12:33.557 ===================================================== 00:12:33.557 Reservations: Not Supported 00:12:33.557 ===================================================== 00:12:33.557 NVMe Controller at PCI bus 0, device 17, function 0 00:12:33.557 ===================================================== 00:12:33.557 Reservations: Not Supported 00:12:33.557 ===================================================== 00:12:33.557 NVMe Controller at PCI bus 0, device 19, function 0 00:12:33.557 ===================================================== 00:12:33.557 Reservations: Not Supported 00:12:33.557 ===================================================== 00:12:33.557 NVMe Controller at PCI bus 0, device 18, function 0 00:12:33.557 ===================================================== 00:12:33.557 Reservations: Not Supported 00:12:33.557 Reservation test passed 00:12:33.557 ************************************ 00:12:33.557 END TEST nvme_reserve 00:12:33.557 ************************************ 00:12:33.557 00:12:33.557 real 0m0.278s 00:12:33.557 user 0m0.101s 00:12:33.557 sys 0m0.133s 00:12:33.557 17:03:25 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.557 17:03:25 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 17:03:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:33.815 17:03:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:33.815 17:03:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.815 17:03:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.815 ************************************ 00:12:33.815 START TEST nvme_err_injection 00:12:33.815 ************************************ 00:12:33.815 17:03:26 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:34.072 NVMe Error Injection test 00:12:34.072 Attached to 0000:00:10.0 00:12:34.072 Attached to 0000:00:11.0 00:12:34.072 Attached to 0000:00:13.0 00:12:34.072 Attached to 0000:00:12.0 00:12:34.072 0000:00:10.0: get features failed as expected 00:12:34.072 0000:00:11.0: get features failed as expected 00:12:34.072 0000:00:13.0: get features failed as expected 00:12:34.072 0000:00:12.0: get features failed as expected 00:12:34.072 
0000:00:13.0: get features successfully as expected 00:12:34.072 0000:00:12.0: get features successfully as expected 00:12:34.072 0000:00:10.0: get features successfully as expected 00:12:34.072 0000:00:11.0: get features successfully as expected 00:12:34.072 0000:00:10.0: read failed as expected 00:12:34.072 0000:00:11.0: read failed as expected 00:12:34.072 0000:00:13.0: read failed as expected 00:12:34.072 0000:00:12.0: read failed as expected 00:12:34.072 0000:00:10.0: read successfully as expected 00:12:34.072 0000:00:11.0: read successfully as expected 00:12:34.072 0000:00:13.0: read successfully as expected 00:12:34.072 0000:00:12.0: read successfully as expected 00:12:34.072 Cleaning up... 00:12:34.072 ************************************ 00:12:34.072 END TEST nvme_err_injection 00:12:34.072 ************************************ 00:12:34.072 00:12:34.072 real 0m0.318s 00:12:34.072 user 0m0.128s 00:12:34.072 sys 0m0.148s 00:12:34.072 17:03:26 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.072 17:03:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:34.072 17:03:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:34.072 17:03:26 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:12:34.072 17:03:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.072 17:03:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.072 ************************************ 00:12:34.072 START TEST nvme_overhead 00:12:34.073 ************************************ 00:12:34.073 17:03:26 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:35.445 Initializing NVMe Controllers 00:12:35.445 Attached to 0000:00:10.0 00:12:35.445 Attached to 0000:00:11.0 00:12:35.445 Attached to 0000:00:13.0 00:12:35.445 Attached to 0000:00:12.0 00:12:35.445 Initialization complete. Launching workers. 
00:12:35.445 submit (in ns) avg, min, max = 17173.6, 13750.0, 102214.1 00:12:35.445 complete (in ns) avg, min, max = 11731.3, 9488.6, 1298034.1 00:12:35.445 00:12:35.445 Submit histogram 00:12:35.445 ================ 00:12:35.445 Range in us Cumulative Count 00:12:35.445 13.731 - 13.789: 0.0196% ( 2) 00:12:35.445 13.789 - 13.847: 0.1372% ( 12) 00:12:35.445 13.847 - 13.905: 0.3136% ( 18) 00:12:35.445 13.905 - 13.964: 0.7153% ( 41) 00:12:35.446 13.964 - 14.022: 1.2445% ( 54) 00:12:35.446 14.022 - 14.080: 1.7638% ( 53) 00:12:35.446 14.080 - 14.138: 2.1754% ( 42) 00:12:35.446 14.138 - 14.196: 2.5380% ( 37) 00:12:35.446 14.196 - 14.255: 2.9005% ( 37) 00:12:35.446 14.255 - 14.313: 3.1455% ( 25) 00:12:35.446 14.313 - 14.371: 3.2925% ( 15) 00:12:35.446 14.371 - 14.429: 3.4983% ( 21) 00:12:35.446 14.429 - 14.487: 3.5865% ( 9) 00:12:35.446 14.487 - 14.545: 3.6649% ( 8) 00:12:35.446 14.545 - 14.604: 3.7335% ( 7) 00:12:35.446 14.604 - 14.662: 3.7923% ( 6) 00:12:35.446 14.662 - 14.720: 3.8805% ( 9) 00:12:35.446 14.720 - 14.778: 3.9686% ( 9) 00:12:35.446 14.778 - 14.836: 4.1156% ( 15) 00:12:35.446 14.836 - 14.895: 4.3410% ( 23) 00:12:35.446 14.895 - 15.011: 6.6928% ( 240) 00:12:35.446 15.011 - 15.127: 14.4733% ( 794) 00:12:35.446 15.127 - 15.244: 26.6242% ( 1240) 00:12:35.446 15.244 - 15.360: 37.1485% ( 1074) 00:12:35.446 15.360 - 15.476: 44.8408% ( 785) 00:12:35.446 15.476 - 15.593: 50.2793% ( 555) 00:12:35.446 15.593 - 15.709: 53.9049% ( 370) 00:12:35.446 15.709 - 15.825: 56.1783% ( 232) 00:12:35.446 15.825 - 15.942: 57.7070% ( 156) 00:12:35.446 15.942 - 16.058: 58.9711% ( 129) 00:12:35.446 16.058 - 16.175: 60.4312% ( 149) 00:12:35.446 16.175 - 16.291: 62.1852% ( 179) 00:12:35.446 16.291 - 16.407: 64.1254% ( 198) 00:12:35.446 16.407 - 16.524: 65.4287% ( 133) 00:12:35.446 16.524 - 16.640: 66.7614% ( 136) 00:12:35.446 16.640 - 16.756: 67.9079% ( 117) 00:12:35.446 16.756 - 16.873: 68.8290% ( 94) 00:12:35.446 16.873 - 16.989: 69.5639% ( 75) 00:12:35.446 16.989 - 17.105: 70.1911% ( 64) 00:12:35.446 17.105 - 17.222: 70.5537% ( 37) 00:12:35.446 17.222 - 17.338: 70.8084% ( 26) 00:12:35.446 17.338 - 17.455: 70.9652% ( 16) 00:12:35.446 17.455 - 17.571: 71.1122% ( 15) 00:12:35.446 17.571 - 17.687: 71.2200% ( 11) 00:12:35.446 17.687 - 17.804: 71.3082% ( 9) 00:12:35.446 17.804 - 17.920: 71.3964% ( 9) 00:12:35.446 17.920 - 18.036: 71.5728% ( 18) 00:12:35.446 18.036 - 18.153: 72.3175% ( 76) 00:12:35.446 18.153 - 18.269: 74.4635% ( 219) 00:12:35.446 18.269 - 18.385: 77.6188% ( 322) 00:12:35.446 18.385 - 18.502: 79.8824% ( 231) 00:12:35.446 18.502 - 18.618: 81.6561% ( 181) 00:12:35.446 18.618 - 18.735: 82.8809% ( 125) 00:12:35.446 18.735 - 18.851: 83.7041% ( 84) 00:12:35.446 18.851 - 18.967: 84.2038% ( 51) 00:12:35.446 18.967 - 19.084: 84.5468% ( 35) 00:12:35.446 19.084 - 19.200: 84.8702% ( 33) 00:12:35.446 19.200 - 19.316: 85.3111% ( 45) 00:12:35.446 19.316 - 19.433: 85.7227% ( 42) 00:12:35.446 19.433 - 19.549: 86.0657% ( 35) 00:12:35.446 19.549 - 19.665: 86.4184% ( 36) 00:12:35.446 19.665 - 19.782: 86.7516% ( 34) 00:12:35.446 19.782 - 19.898: 87.1436% ( 40) 00:12:35.446 19.898 - 20.015: 87.3493% ( 21) 00:12:35.446 20.015 - 20.131: 87.5159% ( 17) 00:12:35.446 20.131 - 20.247: 87.7315% ( 22) 00:12:35.446 20.247 - 20.364: 87.8883% ( 16) 00:12:35.446 20.364 - 20.480: 88.0451% ( 16) 00:12:35.446 20.480 - 20.596: 88.1921% ( 15) 00:12:35.446 20.596 - 20.713: 88.3195% ( 13) 00:12:35.446 20.713 - 20.829: 88.4664% ( 15) 00:12:35.446 20.829 - 20.945: 88.5448% ( 8) 00:12:35.446 20.945 - 21.062: 88.6526% ( 11) 00:12:35.446 
21.062 - 21.178: 88.7702% ( 12) 00:12:35.446 21.178 - 21.295: 88.8290% ( 6) 00:12:35.446 21.295 - 21.411: 88.9074% ( 8) 00:12:35.446 21.411 - 21.527: 88.9956% ( 9) 00:12:35.446 21.527 - 21.644: 89.1328% ( 14) 00:12:35.446 21.644 - 21.760: 89.2602% ( 13) 00:12:35.446 21.760 - 21.876: 89.5345% ( 28) 00:12:35.446 21.876 - 21.993: 89.8187% ( 29) 00:12:35.446 21.993 - 22.109: 90.2107% ( 40) 00:12:35.446 22.109 - 22.225: 90.6026% ( 40) 00:12:35.446 22.225 - 22.342: 90.9162% ( 32) 00:12:35.446 22.342 - 22.458: 91.3180% ( 41) 00:12:35.446 22.458 - 22.575: 91.5336% ( 22) 00:12:35.446 22.575 - 22.691: 91.7589% ( 23) 00:12:35.446 22.691 - 22.807: 92.0039% ( 25) 00:12:35.446 22.807 - 22.924: 92.2783% ( 28) 00:12:35.446 22.924 - 23.040: 92.5821% ( 31) 00:12:35.446 23.040 - 23.156: 92.8466% ( 27) 00:12:35.446 23.156 - 23.273: 93.2190% ( 38) 00:12:35.446 23.273 - 23.389: 93.6012% ( 39) 00:12:35.446 23.389 - 23.505: 93.8756% ( 28) 00:12:35.446 23.505 - 23.622: 94.1597% ( 29) 00:12:35.446 23.622 - 23.738: 94.3361% ( 18) 00:12:35.446 23.738 - 23.855: 94.6007% ( 27) 00:12:35.446 23.855 - 23.971: 94.8065% ( 21) 00:12:35.446 23.971 - 24.087: 95.0122% ( 21) 00:12:35.446 24.087 - 24.204: 95.1494% ( 14) 00:12:35.446 24.204 - 24.320: 95.2768% ( 13) 00:12:35.446 24.320 - 24.436: 95.4630% ( 19) 00:12:35.446 24.436 - 24.553: 95.5806% ( 12) 00:12:35.446 24.553 - 24.669: 95.7668% ( 19) 00:12:35.446 24.669 - 24.785: 95.9824% ( 22) 00:12:35.446 24.785 - 24.902: 96.1098% ( 13) 00:12:35.446 24.902 - 25.018: 96.2371% ( 13) 00:12:35.446 25.018 - 25.135: 96.3743% ( 14) 00:12:35.446 25.135 - 25.251: 96.5213% ( 15) 00:12:35.446 25.251 - 25.367: 96.5605% ( 4) 00:12:35.446 25.367 - 25.484: 96.6585% ( 10) 00:12:35.446 25.484 - 25.600: 96.7467% ( 9) 00:12:35.446 25.600 - 25.716: 96.8643% ( 12) 00:12:35.446 25.716 - 25.833: 96.9133% ( 5) 00:12:35.446 25.833 - 25.949: 96.9721% ( 6) 00:12:35.446 25.949 - 26.065: 97.0603% ( 9) 00:12:35.446 26.065 - 26.182: 97.1289% ( 7) 00:12:35.446 26.182 - 26.298: 97.1877% ( 6) 00:12:35.446 26.298 - 26.415: 97.2171% ( 3) 00:12:35.446 26.415 - 26.531: 97.2758% ( 6) 00:12:35.446 26.531 - 26.647: 97.3542% ( 8) 00:12:35.446 26.647 - 26.764: 97.4620% ( 11) 00:12:35.446 26.764 - 26.880: 97.5208% ( 6) 00:12:35.446 26.880 - 26.996: 97.5600% ( 4) 00:12:35.446 26.996 - 27.113: 97.6188% ( 6) 00:12:35.446 27.113 - 27.229: 97.7070% ( 9) 00:12:35.446 27.229 - 27.345: 97.7952% ( 9) 00:12:35.446 27.345 - 27.462: 97.8932% ( 10) 00:12:35.446 27.462 - 27.578: 97.9618% ( 7) 00:12:35.446 27.578 - 27.695: 98.0500% ( 9) 00:12:35.446 27.695 - 27.811: 98.0990% ( 5) 00:12:35.446 27.811 - 27.927: 98.1774% ( 8) 00:12:35.446 27.927 - 28.044: 98.2362% ( 6) 00:12:35.446 28.044 - 28.160: 98.2754% ( 4) 00:12:35.446 28.160 - 28.276: 98.3146% ( 4) 00:12:35.446 28.276 - 28.393: 98.3831% ( 7) 00:12:35.446 28.393 - 28.509: 98.4125% ( 3) 00:12:35.446 28.509 - 28.625: 98.4419% ( 3) 00:12:35.446 28.625 - 28.742: 98.4811% ( 4) 00:12:35.446 28.742 - 28.858: 98.5105% ( 3) 00:12:35.446 28.858 - 28.975: 98.5203% ( 1) 00:12:35.446 28.975 - 29.091: 98.5301% ( 1) 00:12:35.446 29.091 - 29.207: 98.5791% ( 5) 00:12:35.446 29.207 - 29.324: 98.5987% ( 2) 00:12:35.446 29.324 - 29.440: 98.6379% ( 4) 00:12:35.446 29.440 - 29.556: 98.6869% ( 5) 00:12:35.446 29.556 - 29.673: 98.7065% ( 2) 00:12:35.446 29.789 - 30.022: 98.8045% ( 10) 00:12:35.446 30.022 - 30.255: 98.8535% ( 5) 00:12:35.446 30.255 - 30.487: 98.9025% ( 5) 00:12:35.446 30.487 - 30.720: 98.9417% ( 4) 00:12:35.446 30.720 - 30.953: 99.0201% ( 8) 00:12:35.446 30.953 - 31.185: 99.0397% ( 2) 
00:12:35.446 31.185 - 31.418: 99.0985% ( 6) 00:12:35.446 31.418 - 31.651: 99.1769% ( 8) 00:12:35.446 31.651 - 31.884: 99.2063% ( 3) 00:12:35.446 31.884 - 32.116: 99.2161% ( 1) 00:12:35.446 32.116 - 32.349: 99.2553% ( 4) 00:12:35.446 32.349 - 32.582: 99.2847% ( 3) 00:12:35.446 32.815 - 33.047: 99.3435% ( 6) 00:12:35.446 33.047 - 33.280: 99.3631% ( 2) 00:12:35.446 33.280 - 33.513: 99.3925% ( 3) 00:12:35.446 33.513 - 33.745: 99.4317% ( 4) 00:12:35.446 33.745 - 33.978: 99.4708% ( 4) 00:12:35.446 33.978 - 34.211: 99.4904% ( 2) 00:12:35.446 34.211 - 34.444: 99.5002% ( 1) 00:12:35.446 34.444 - 34.676: 99.5296% ( 3) 00:12:35.446 34.676 - 34.909: 99.5492% ( 2) 00:12:35.446 34.909 - 35.142: 99.6080% ( 6) 00:12:35.446 35.142 - 35.375: 99.6276% ( 2) 00:12:35.446 35.607 - 35.840: 99.6472% ( 2) 00:12:35.446 35.840 - 36.073: 99.6570% ( 1) 00:12:35.446 36.073 - 36.305: 99.6766% ( 2) 00:12:35.446 36.305 - 36.538: 99.6864% ( 1) 00:12:35.446 36.771 - 37.004: 99.6962% ( 1) 00:12:35.446 37.004 - 37.236: 99.7158% ( 2) 00:12:35.446 37.469 - 37.702: 99.7256% ( 1) 00:12:35.446 37.702 - 37.935: 99.7452% ( 2) 00:12:35.446 38.400 - 38.633: 99.7648% ( 2) 00:12:35.446 39.098 - 39.331: 99.7844% ( 2) 00:12:35.446 39.331 - 39.564: 99.7942% ( 1) 00:12:35.446 39.796 - 40.029: 99.8040% ( 1) 00:12:35.446 40.262 - 40.495: 99.8138% ( 1) 00:12:35.446 40.960 - 41.193: 99.8236% ( 1) 00:12:35.447 41.658 - 41.891: 99.8334% ( 1) 00:12:35.447 42.356 - 42.589: 99.8628% ( 3) 00:12:35.447 42.822 - 43.055: 99.8726% ( 1) 00:12:35.447 43.055 - 43.287: 99.8824% ( 1) 00:12:35.447 43.287 - 43.520: 99.8922% ( 1) 00:12:35.447 44.218 - 44.451: 99.9020% ( 1) 00:12:35.447 45.382 - 45.615: 99.9118% ( 1) 00:12:35.447 49.571 - 49.804: 99.9216% ( 1) 00:12:35.447 52.596 - 52.829: 99.9314% ( 1) 00:12:35.447 56.320 - 56.553: 99.9412% ( 1) 00:12:35.447 60.044 - 60.509: 99.9510% ( 1) 00:12:35.447 66.560 - 67.025: 99.9608% ( 1) 00:12:35.447 68.422 - 68.887: 99.9804% ( 2) 00:12:35.447 77.731 - 78.196: 99.9902% ( 1) 00:12:35.447 101.935 - 102.400: 100.0000% ( 1) 00:12:35.447 00:12:35.447 Complete histogram 00:12:35.447 ================== 00:12:35.447 Range in us Cumulative Count 00:12:35.447 9.484 - 9.542: 0.0490% ( 5) 00:12:35.447 9.542 - 9.600: 0.1862% ( 14) 00:12:35.447 9.600 - 9.658: 0.7447% ( 57) 00:12:35.447 9.658 - 9.716: 2.3910% ( 168) 00:12:35.447 9.716 - 9.775: 6.1538% ( 384) 00:12:35.447 9.775 - 9.833: 12.6017% ( 658) 00:12:35.447 9.833 - 9.891: 19.9118% ( 746) 00:12:35.447 9.891 - 9.949: 28.2607% ( 852) 00:12:35.447 9.949 - 10.007: 35.4728% ( 736) 00:12:35.447 10.007 - 10.065: 41.5385% ( 619) 00:12:35.447 10.065 - 10.124: 46.1832% ( 474) 00:12:35.447 10.124 - 10.182: 49.6227% ( 351) 00:12:35.447 10.182 - 10.240: 52.1509% ( 258) 00:12:35.447 10.240 - 10.298: 54.0519% ( 194) 00:12:35.447 10.298 - 10.356: 55.4728% ( 145) 00:12:35.447 10.356 - 10.415: 56.6291% ( 118) 00:12:35.447 10.415 - 10.473: 57.5502% ( 94) 00:12:35.447 10.473 - 10.531: 58.2656% ( 73) 00:12:35.447 10.531 - 10.589: 58.8731% ( 62) 00:12:35.447 10.589 - 10.647: 59.3631% ( 50) 00:12:35.447 10.647 - 10.705: 59.7354% ( 38) 00:12:35.447 10.705 - 10.764: 60.0784% ( 35) 00:12:35.447 10.764 - 10.822: 60.2646% ( 19) 00:12:35.447 10.822 - 10.880: 60.4998% ( 24) 00:12:35.447 10.880 - 10.938: 60.7741% ( 28) 00:12:35.447 10.938 - 10.996: 61.0583% ( 29) 00:12:35.447 10.996 - 11.055: 61.4993% ( 45) 00:12:35.447 11.055 - 11.113: 62.0284% ( 54) 00:12:35.447 11.113 - 11.171: 62.4988% ( 48) 00:12:35.447 11.171 - 11.229: 63.0965% ( 61) 00:12:35.447 11.229 - 11.287: 63.7237% ( 64) 00:12:35.447 11.287 - 
11.345: 64.1450% ( 43) 00:12:35.447 11.345 - 11.404: 64.6056% ( 47) 00:12:35.447 11.404 - 11.462: 64.9878% ( 39) 00:12:35.447 11.462 - 11.520: 65.2915% ( 31) 00:12:35.447 11.520 - 11.578: 65.5365% ( 25) 00:12:35.447 11.578 - 11.636: 65.7325% ( 20) 00:12:35.447 11.636 - 11.695: 65.9677% ( 24) 00:12:35.447 11.695 - 11.753: 66.1146% ( 15) 00:12:35.447 11.753 - 11.811: 66.2028% ( 9) 00:12:35.447 11.811 - 11.869: 66.2910% ( 9) 00:12:35.447 11.869 - 11.927: 66.4968% ( 21) 00:12:35.447 11.927 - 11.985: 66.7516% ( 26) 00:12:35.447 11.985 - 12.044: 67.2024% ( 46) 00:12:35.447 12.044 - 12.102: 68.0255% ( 84) 00:12:35.447 12.102 - 12.160: 69.4757% ( 148) 00:12:35.447 12.160 - 12.218: 71.2984% ( 186) 00:12:35.447 12.218 - 12.276: 73.6796% ( 243) 00:12:35.447 12.276 - 12.335: 75.9530% ( 232) 00:12:35.447 12.335 - 12.393: 78.2852% ( 238) 00:12:35.447 12.393 - 12.451: 80.2450% ( 200) 00:12:35.447 12.451 - 12.509: 81.7246% ( 151) 00:12:35.447 12.509 - 12.567: 82.9397% ( 124) 00:12:35.447 12.567 - 12.625: 83.8707% ( 95) 00:12:35.447 12.625 - 12.684: 84.4684% ( 61) 00:12:35.447 12.684 - 12.742: 85.0269% ( 57) 00:12:35.447 12.742 - 12.800: 85.4091% ( 39) 00:12:35.447 12.800 - 12.858: 85.6345% ( 23) 00:12:35.447 12.858 - 12.916: 85.8305% ( 20) 00:12:35.447 12.916 - 12.975: 86.0069% ( 18) 00:12:35.447 12.975 - 13.033: 86.1930% ( 19) 00:12:35.447 13.033 - 13.091: 86.3204% ( 13) 00:12:35.447 13.091 - 13.149: 86.4870% ( 17) 00:12:35.447 13.149 - 13.207: 86.6242% ( 14) 00:12:35.447 13.207 - 13.265: 86.8104% ( 19) 00:12:35.447 13.265 - 13.324: 86.8986% ( 9) 00:12:35.447 13.324 - 13.382: 87.0260% ( 13) 00:12:35.447 13.382 - 13.440: 87.1436% ( 12) 00:12:35.447 13.440 - 13.498: 87.2513% ( 11) 00:12:35.447 13.498 - 13.556: 87.3591% ( 11) 00:12:35.447 13.556 - 13.615: 87.5649% ( 21) 00:12:35.447 13.615 - 13.673: 87.7315% ( 17) 00:12:35.447 13.673 - 13.731: 87.9765% ( 25) 00:12:35.447 13.731 - 13.789: 88.1529% ( 18) 00:12:35.447 13.789 - 13.847: 88.3097% ( 16) 00:12:35.447 13.847 - 13.905: 88.4468% ( 14) 00:12:35.447 13.905 - 13.964: 88.5840% ( 14) 00:12:35.447 13.964 - 14.022: 88.6722% ( 9) 00:12:35.447 14.022 - 14.080: 88.7212% ( 5) 00:12:35.447 14.080 - 14.138: 88.7898% ( 7) 00:12:35.447 14.138 - 14.196: 88.8584% ( 7) 00:12:35.447 14.196 - 14.255: 88.8976% ( 4) 00:12:35.447 14.255 - 14.313: 88.9564% ( 6) 00:12:35.447 14.313 - 14.371: 88.9858% ( 3) 00:12:35.447 14.371 - 14.429: 89.0152% ( 3) 00:12:35.447 14.429 - 14.487: 89.0936% ( 8) 00:12:35.447 14.545 - 14.604: 89.1328% ( 4) 00:12:35.447 14.604 - 14.662: 89.1720% ( 4) 00:12:35.447 14.662 - 14.720: 89.2014% ( 3) 00:12:35.447 14.720 - 14.778: 89.2112% ( 1) 00:12:35.447 14.778 - 14.836: 89.2504% ( 4) 00:12:35.447 14.836 - 14.895: 89.2896% ( 4) 00:12:35.447 14.895 - 15.011: 89.4561% ( 17) 00:12:35.447 15.011 - 15.127: 89.6913% ( 24) 00:12:35.447 15.127 - 15.244: 90.1323% ( 45) 00:12:35.447 15.244 - 15.360: 90.3283% ( 20) 00:12:35.447 15.360 - 15.476: 90.4949% ( 17) 00:12:35.447 15.476 - 15.593: 90.5634% ( 7) 00:12:35.447 15.593 - 15.709: 90.7398% ( 18) 00:12:35.447 15.709 - 15.825: 90.8476% ( 11) 00:12:35.447 15.825 - 15.942: 91.0142% ( 17) 00:12:35.447 15.942 - 16.058: 91.1024% ( 9) 00:12:35.447 16.058 - 16.175: 91.2984% ( 20) 00:12:35.447 16.175 - 16.291: 91.5434% ( 25) 00:12:35.447 16.291 - 16.407: 91.9941% ( 46) 00:12:35.447 16.407 - 16.524: 92.6899% ( 71) 00:12:35.447 16.524 - 16.640: 93.5816% ( 91) 00:12:35.447 16.640 - 16.756: 94.0813% ( 51) 00:12:35.447 16.756 - 16.873: 94.7771% ( 71) 00:12:35.447 16.873 - 16.989: 95.2082% ( 44) 00:12:35.447 16.989 - 17.105: 
95.4924% ( 29) 00:12:35.447 17.105 - 17.222: 95.7178% ( 23) 00:12:35.447 17.222 - 17.338: 95.8354% ( 12) 00:12:35.447 17.338 - 17.455: 95.9922% ( 16) 00:12:35.447 17.455 - 17.571: 96.1391% ( 15) 00:12:35.447 17.571 - 17.687: 96.2567% ( 12) 00:12:35.447 17.687 - 17.804: 96.3253% ( 7) 00:12:35.447 17.804 - 17.920: 96.4037% ( 8) 00:12:35.447 17.920 - 18.036: 96.5409% ( 14) 00:12:35.447 18.036 - 18.153: 96.6487% ( 11) 00:12:35.447 18.153 - 18.269: 96.7173% ( 7) 00:12:35.447 18.269 - 18.385: 96.8153% ( 10) 00:12:35.447 18.385 - 18.502: 96.9133% ( 10) 00:12:35.447 18.502 - 18.618: 97.0309% ( 12) 00:12:35.447 18.618 - 18.735: 97.1289% ( 10) 00:12:35.447 18.735 - 18.851: 97.1877% ( 6) 00:12:35.447 18.851 - 18.967: 97.2758% ( 9) 00:12:35.447 18.967 - 19.084: 97.3248% ( 5) 00:12:35.447 19.084 - 19.200: 97.3934% ( 7) 00:12:35.447 19.200 - 19.316: 97.4816% ( 9) 00:12:35.447 19.316 - 19.433: 97.5208% ( 4) 00:12:35.447 19.433 - 19.549: 97.5894% ( 7) 00:12:35.447 19.549 - 19.665: 97.6482% ( 6) 00:12:35.447 19.665 - 19.782: 97.7168% ( 7) 00:12:35.447 19.782 - 19.898: 97.7854% ( 7) 00:12:35.447 19.898 - 20.015: 97.8344% ( 5) 00:12:35.447 20.015 - 20.131: 97.8834% ( 5) 00:12:35.447 20.131 - 20.247: 97.9618% ( 8) 00:12:35.447 20.247 - 20.364: 97.9912% ( 3) 00:12:35.447 20.364 - 20.480: 98.0500% ( 6) 00:12:35.447 20.480 - 20.596: 98.1382% ( 9) 00:12:35.447 20.596 - 20.713: 98.1774% ( 4) 00:12:35.447 20.713 - 20.829: 98.1970% ( 2) 00:12:35.447 20.945 - 21.062: 98.2460% ( 5) 00:12:35.447 21.062 - 21.178: 98.2754% ( 3) 00:12:35.447 21.178 - 21.295: 98.3146% ( 4) 00:12:35.447 21.295 - 21.411: 98.3733% ( 6) 00:12:35.447 21.411 - 21.527: 98.4615% ( 9) 00:12:35.447 21.527 - 21.644: 98.5007% ( 4) 00:12:35.447 21.644 - 21.760: 98.5301% ( 3) 00:12:35.447 21.760 - 21.876: 98.5399% ( 1) 00:12:35.447 21.876 - 21.993: 98.5595% ( 2) 00:12:35.447 21.993 - 22.109: 98.5987% ( 4) 00:12:35.447 22.109 - 22.225: 98.6673% ( 7) 00:12:35.447 22.225 - 22.342: 98.7065% ( 4) 00:12:35.447 22.342 - 22.458: 98.7457% ( 4) 00:12:35.447 22.458 - 22.575: 98.7653% ( 2) 00:12:35.447 22.575 - 22.691: 98.7751% ( 1) 00:12:35.447 22.691 - 22.807: 98.7947% ( 2) 00:12:35.447 22.807 - 22.924: 98.8437% ( 5) 00:12:35.447 22.924 - 23.040: 98.8633% ( 2) 00:12:35.447 23.040 - 23.156: 98.9025% ( 4) 00:12:35.447 23.156 - 23.273: 98.9319% ( 3) 00:12:35.447 23.273 - 23.389: 98.9711% ( 4) 00:12:35.447 23.389 - 23.505: 99.0005% ( 3) 00:12:35.447 23.505 - 23.622: 99.0495% ( 5) 00:12:35.447 23.622 - 23.738: 99.0789% ( 3) 00:12:35.448 23.738 - 23.855: 99.0985% ( 2) 00:12:35.448 23.855 - 23.971: 99.1083% ( 1) 00:12:35.448 23.971 - 24.087: 99.1377% ( 3) 00:12:35.448 24.087 - 24.204: 99.1671% ( 3) 00:12:35.448 24.204 - 24.320: 99.1769% ( 1) 00:12:35.448 24.320 - 24.436: 99.2063% ( 3) 00:12:35.448 24.436 - 24.553: 99.2259% ( 2) 00:12:35.448 24.553 - 24.669: 99.2357% ( 1) 00:12:35.448 24.669 - 24.785: 99.2553% ( 2) 00:12:35.448 24.785 - 24.902: 99.2749% ( 2) 00:12:35.448 24.902 - 25.018: 99.2945% ( 2) 00:12:35.448 25.018 - 25.135: 99.3043% ( 1) 00:12:35.448 25.251 - 25.367: 99.3141% ( 1) 00:12:35.448 25.367 - 25.484: 99.3239% ( 1) 00:12:35.448 25.600 - 25.716: 99.3337% ( 1) 00:12:35.448 25.716 - 25.833: 99.3435% ( 1) 00:12:35.448 25.949 - 26.065: 99.3631% ( 2) 00:12:35.448 26.065 - 26.182: 99.3729% ( 1) 00:12:35.448 26.182 - 26.298: 99.3827% ( 1) 00:12:35.448 26.298 - 26.415: 99.3925% ( 1) 00:12:35.448 26.415 - 26.531: 99.4023% ( 1) 00:12:35.448 26.531 - 26.647: 99.4219% ( 2) 00:12:35.448 26.647 - 26.764: 99.4317% ( 1) 00:12:35.448 26.764 - 26.880: 99.4415% ( 1) 
00:12:35.448 26.880 - 26.996: 99.4512% ( 1) 00:12:35.448 27.229 - 27.345: 99.4610% ( 1) 00:12:35.448 27.462 - 27.578: 99.4806% ( 2) 00:12:35.448 27.578 - 27.695: 99.5002% ( 2) 00:12:35.448 27.811 - 27.927: 99.5100% ( 1) 00:12:35.448 27.927 - 28.044: 99.5394% ( 3) 00:12:35.448 28.044 - 28.160: 99.5590% ( 2) 00:12:35.448 28.160 - 28.276: 99.5688% ( 1) 00:12:35.448 28.276 - 28.393: 99.5884% ( 2) 00:12:35.448 28.625 - 28.742: 99.5982% ( 1) 00:12:35.448 28.858 - 28.975: 99.6080% ( 1) 00:12:35.448 29.091 - 29.207: 99.6178% ( 1) 00:12:35.448 29.324 - 29.440: 99.6276% ( 1) 00:12:35.448 29.556 - 29.673: 99.6374% ( 1) 00:12:35.448 30.022 - 30.255: 99.6472% ( 1) 00:12:35.448 30.487 - 30.720: 99.6668% ( 2) 00:12:35.448 31.418 - 31.651: 99.6766% ( 1) 00:12:35.448 31.884 - 32.116: 99.6962% ( 2) 00:12:35.448 32.116 - 32.349: 99.7158% ( 2) 00:12:35.448 32.582 - 32.815: 99.7354% ( 2) 00:12:35.448 33.047 - 33.280: 99.7452% ( 1) 00:12:35.448 33.280 - 33.513: 99.7746% ( 3) 00:12:35.448 33.513 - 33.745: 99.8040% ( 3) 00:12:35.448 33.978 - 34.211: 99.8138% ( 1) 00:12:35.448 36.538 - 36.771: 99.8236% ( 1) 00:12:35.448 37.004 - 37.236: 99.8334% ( 1) 00:12:35.448 37.935 - 38.167: 99.8432% ( 1) 00:12:35.448 38.400 - 38.633: 99.8530% ( 1) 00:12:35.448 39.098 - 39.331: 99.8628% ( 1) 00:12:35.448 39.331 - 39.564: 99.8726% ( 1) 00:12:35.448 40.960 - 41.193: 99.8824% ( 1) 00:12:35.448 42.822 - 43.055: 99.8922% ( 1) 00:12:35.448 43.287 - 43.520: 99.9020% ( 1) 00:12:35.448 45.149 - 45.382: 99.9118% ( 1) 00:12:35.448 51.898 - 52.131: 99.9216% ( 1) 00:12:35.448 58.647 - 58.880: 99.9314% ( 1) 00:12:35.448 60.509 - 60.975: 99.9412% ( 1) 00:12:35.448 60.975 - 61.440: 99.9510% ( 1) 00:12:35.448 72.611 - 73.076: 99.9608% ( 1) 00:12:35.448 79.127 - 79.593: 99.9706% ( 1) 00:12:35.448 115.433 - 115.898: 99.9804% ( 1) 00:12:35.448 121.018 - 121.949: 99.9902% ( 1) 00:12:35.448 1295.825 - 1303.273: 100.0000% ( 1) 00:12:35.448 00:12:35.448 00:12:35.448 real 0m1.316s 00:12:35.448 user 0m1.121s 00:12:35.448 sys 0m0.144s 00:12:35.448 17:03:27 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.448 17:03:27 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:35.448 ************************************ 00:12:35.448 END TEST nvme_overhead 00:12:35.448 ************************************ 00:12:35.448 17:03:27 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:35.448 17:03:27 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:12:35.448 17:03:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.448 17:03:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.448 ************************************ 00:12:35.448 START TEST nvme_arbitration 00:12:35.448 ************************************ 00:12:35.448 17:03:27 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:39.658 Initializing NVMe Controllers 00:12:39.658 Attached to 0000:00:10.0 00:12:39.658 Attached to 0000:00:11.0 00:12:39.658 Attached to 0000:00:13.0 00:12:39.658 Attached to 0000:00:12.0 00:12:39.658 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:39.658 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:39.658 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:39.658 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:39.658 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:39.658 Associating QEMU NVMe Ctrl (12342 ) with lcore 
1 00:12:39.658 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:39.658 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:39.658 Initialization complete. Launching workers. 00:12:39.658 Starting thread on core 1 with urgent priority queue 00:12:39.659 Starting thread on core 2 with urgent priority queue 00:12:39.659 Starting thread on core 3 with urgent priority queue 00:12:39.659 Starting thread on core 0 with urgent priority queue 00:12:39.659 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:12:39.659 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:12:39.659 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:12:39.659 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:12:39.659 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:12:39.659 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:12:39.659 ======================================================== 00:12:39.659 00:12:39.659 ************************************ 00:12:39.659 END TEST nvme_arbitration 00:12:39.659 ************************************ 00:12:39.659 00:12:39.659 real 0m3.501s 00:12:39.659 user 0m9.321s 00:12:39.659 sys 0m0.190s 00:12:39.659 17:03:31 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.659 17:03:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:39.659 17:03:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.659 ************************************ 00:12:39.659 START TEST nvme_single_aen 00:12:39.659 ************************************ 00:12:39.659 17:03:31 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:39.659 Asynchronous Event Request test 00:12:39.659 Attached to 0000:00:10.0 00:12:39.659 Attached to 0000:00:11.0 00:12:39.659 Attached to 0000:00:13.0 00:12:39.659 Attached to 0000:00:12.0 00:12:39.659 Reset controller to setup AER completions for this process 00:12:39.659 Registering asynchronous event callbacks... 
00:12:39.659 Getting orig temperature thresholds of all controllers 00:12:39.659 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:39.659 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:39.659 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:39.659 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:39.659 Setting all controllers temperature threshold low to trigger AER 00:12:39.659 Waiting for all controllers temperature threshold to be set lower 00:12:39.659 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:39.659 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:39.659 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:39.659 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:39.659 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:39.659 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:39.659 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:39.659 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:39.659 Waiting for all controllers to trigger AER and reset threshold 00:12:39.659 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:39.659 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:39.659 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:39.659 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:39.659 Cleaning up... 00:12:39.659 ************************************ 00:12:39.659 END TEST nvme_single_aen 00:12:39.659 ************************************ 00:12:39.659 00:12:39.659 real 0m0.273s 00:12:39.659 user 0m0.106s 00:12:39.659 sys 0m0.121s 00:12:39.659 17:03:31 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.659 17:03:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:39.659 17:03:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.659 17:03:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.659 ************************************ 00:12:39.659 START TEST nvme_doorbell_aers 00:12:39.659 ************************************ 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:39.659 17:03:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:39.659 [2024-07-25 17:03:31.966139] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:12:49.623 Executing: test_write_invalid_db 00:12:49.623 Waiting for AER completion... 00:12:49.623 Failure: test_write_invalid_db 00:12:49.623 00:12:49.623 Executing: test_invalid_db_write_overflow_sq 00:12:49.623 Waiting for AER completion... 00:12:49.623 Failure: test_invalid_db_write_overflow_sq 00:12:49.623 00:12:49.623 Executing: test_invalid_db_write_overflow_cq 00:12:49.623 Waiting for AER completion... 00:12:49.623 Failure: test_invalid_db_write_overflow_cq 00:12:49.623 00:12:49.623 17:03:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:49.623 17:03:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:49.623 [2024-07-25 17:03:42.051212] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:12:59.592 Executing: test_write_invalid_db 00:12:59.592 Waiting for AER completion... 00:12:59.592 Failure: test_write_invalid_db 00:12:59.592 00:12:59.593 Executing: test_invalid_db_write_overflow_sq 00:12:59.593 Waiting for AER completion... 00:12:59.593 Failure: test_invalid_db_write_overflow_sq 00:12:59.593 00:12:59.593 Executing: test_invalid_db_write_overflow_cq 00:12:59.593 Waiting for AER completion... 00:12:59.593 Failure: test_invalid_db_write_overflow_cq 00:12:59.593 00:12:59.593 17:03:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:59.593 17:03:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:59.593 [2024-07-25 17:03:52.052383] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:09.567 Executing: test_write_invalid_db 00:13:09.567 Waiting for AER completion... 00:13:09.567 Failure: test_write_invalid_db 00:13:09.567 00:13:09.567 Executing: test_invalid_db_write_overflow_sq 00:13:09.567 Waiting for AER completion... 00:13:09.567 Failure: test_invalid_db_write_overflow_sq 00:13:09.567 00:13:09.567 Executing: test_invalid_db_write_overflow_cq 00:13:09.567 Waiting for AER completion... 
00:13:09.567 Failure: test_invalid_db_write_overflow_cq 00:13:09.567 00:13:09.567 17:04:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:09.567 17:04:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:09.826 [2024-07-25 17:04:02.153693] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.830 Executing: test_write_invalid_db 00:13:19.831 Waiting for AER completion... 00:13:19.831 Failure: test_write_invalid_db 00:13:19.831 00:13:19.831 Executing: test_invalid_db_write_overflow_sq 00:13:19.831 Waiting for AER completion... 00:13:19.831 Failure: test_invalid_db_write_overflow_sq 00:13:19.831 00:13:19.831 Executing: test_invalid_db_write_overflow_cq 00:13:19.831 Waiting for AER completion... 00:13:19.831 Failure: test_invalid_db_write_overflow_cq 00:13:19.831 00:13:19.831 00:13:19.831 real 0m40.274s 00:13:19.831 user 0m34.206s 00:13:19.831 sys 0m5.685s 00:13:19.831 ************************************ 00:13:19.831 END TEST nvme_doorbell_aers 00:13:19.831 ************************************ 00:13:19.831 17:04:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.831 17:04:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 17:04:11 nvme -- nvme/nvme.sh@97 -- # uname 00:13:19.831 17:04:11 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:19.831 17:04:11 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:19.831 17:04:11 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:13:19.831 17:04:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.831 17:04:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 ************************************ 00:13:19.831 START TEST nvme_multi_aen 00:13:19.831 ************************************ 00:13:19.831 17:04:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:19.831 [2024-07-25 17:04:12.237184] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.237330] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.237373] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.239888] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.239953] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.239990] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.241765] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. 
Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.241823] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.241861] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.243824] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.243887] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 [2024-07-25 17:04:12.243910] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69024) is not found. Dropping the request. 00:13:19.831 Child process pid: 69540 00:13:20.089 [Child] Asynchronous Event Request test 00:13:20.089 [Child] Attached to 0000:00:10.0 00:13:20.089 [Child] Attached to 0000:00:11.0 00:13:20.089 [Child] Attached to 0000:00:13.0 00:13:20.089 [Child] Attached to 0000:00:12.0 00:13:20.089 [Child] Registering asynchronous event callbacks... 00:13:20.089 [Child] Getting orig temperature thresholds of all controllers 00:13:20.089 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:20.089 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.089 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.089 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.089 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.089 [Child] Cleaning up... 00:13:20.089 Asynchronous Event Request test 00:13:20.089 Attached to 0000:00:10.0 00:13:20.089 Attached to 0000:00:11.0 00:13:20.089 Attached to 0000:00:13.0 00:13:20.089 Attached to 0000:00:12.0 00:13:20.089 Reset controller to setup AER completions for this process 00:13:20.089 Registering asynchronous event callbacks... 
00:13:20.089 Getting orig temperature thresholds of all controllers 00:13:20.089 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:20.089 Setting all controllers temperature threshold low to trigger AER 00:13:20.089 Waiting for all controllers temperature threshold to be set lower 00:13:20.089 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:20.089 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.089 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:20.090 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.090 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:20.090 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:20.090 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:20.090 Waiting for all controllers to trigger AER and reset threshold 00:13:20.090 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.090 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.090 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.090 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:20.090 Cleaning up... 00:13:20.360 00:13:20.360 real 0m0.607s 00:13:20.360 user 0m0.239s 00:13:20.360 sys 0m0.263s 00:13:20.360 17:04:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.360 17:04:12 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:20.360 ************************************ 00:13:20.360 END TEST nvme_multi_aen 00:13:20.360 ************************************ 00:13:20.360 17:04:12 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:20.360 17:04:12 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:20.360 17:04:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.360 17:04:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.360 ************************************ 00:13:20.360 START TEST nvme_startup 00:13:20.360 ************************************ 00:13:20.360 17:04:12 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:20.630 Initializing NVMe Controllers 00:13:20.630 Attached to 0000:00:10.0 00:13:20.630 Attached to 0000:00:11.0 00:13:20.630 Attached to 0000:00:13.0 00:13:20.630 Attached to 0000:00:12.0 00:13:20.630 Initialization complete. 00:13:20.630 Time used:208802.562 (us). 
00:13:20.630 ************************************ 00:13:20.630 END TEST nvme_startup 00:13:20.630 ************************************ 00:13:20.630 00:13:20.630 real 0m0.313s 00:13:20.630 user 0m0.117s 00:13:20.630 sys 0m0.151s 00:13:20.630 17:04:12 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.630 17:04:12 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:20.630 17:04:12 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:20.630 17:04:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:20.630 17:04:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.630 17:04:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.630 ************************************ 00:13:20.630 START TEST nvme_multi_secondary 00:13:20.630 ************************************ 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69596 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69597 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:20.630 17:04:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:24.818 Initializing NVMe Controllers 00:13:24.818 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:24.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:24.818 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:24.818 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:24.818 Initialization complete. Launching workers. 
00:13:24.818 ======================================================== 00:13:24.818 Latency(us) 00:13:24.818 Device Information : IOPS MiB/s Average min max 00:13:24.818 PCIE (0000:00:10.0) NSID 1 from core 1: 4756.31 18.58 3361.79 1641.52 7221.98 00:13:24.818 PCIE (0000:00:11.0) NSID 1 from core 1: 4756.31 18.58 3363.42 1741.47 7543.96 00:13:24.818 PCIE (0000:00:13.0) NSID 1 from core 1: 4756.31 18.58 3363.56 1711.31 7662.04 00:13:24.818 PCIE (0000:00:12.0) NSID 1 from core 1: 4756.31 18.58 3363.60 1661.56 7139.53 00:13:24.818 PCIE (0000:00:12.0) NSID 2 from core 1: 4756.31 18.58 3363.57 1715.40 8265.30 00:13:24.818 PCIE (0000:00:12.0) NSID 3 from core 1: 4756.31 18.58 3363.72 1648.77 8737.23 00:13:24.818 ======================================================== 00:13:24.818 Total : 28537.85 111.48 3363.28 1641.52 8737.23 00:13:24.818 00:13:24.818 Initializing NVMe Controllers 00:13:24.818 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:24.818 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:24.818 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:24.818 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:24.818 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:24.818 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:24.818 Initialization complete. Launching workers. 00:13:24.818 ======================================================== 00:13:24.818 Latency(us) 00:13:24.818 Device Information : IOPS MiB/s Average min max 00:13:24.818 PCIE (0000:00:10.0) NSID 1 from core 2: 2285.28 8.93 6998.76 2075.64 15205.77 00:13:24.818 PCIE (0000:00:11.0) NSID 1 from core 2: 2285.28 8.93 7000.83 2014.40 16674.48 00:13:24.818 PCIE (0000:00:13.0) NSID 1 from core 2: 2285.28 8.93 7001.40 1702.99 16592.88 00:13:24.818 PCIE (0000:00:12.0) NSID 1 from core 2: 2285.28 8.93 7006.04 2146.35 16856.42 00:13:24.819 PCIE (0000:00:12.0) NSID 2 from core 2: 2285.28 8.93 7010.33 2112.96 16557.95 00:13:24.819 PCIE (0000:00:12.0) NSID 3 from core 2: 2285.28 8.93 7010.16 2073.31 13521.48 00:13:24.819 ======================================================== 00:13:24.819 Total : 13711.69 53.56 7004.59 1702.99 16856.42 00:13:24.819 00:13:24.819 17:04:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69596 00:13:26.194 Initializing NVMe Controllers 00:13:26.194 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:26.194 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:26.194 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:26.194 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:26.194 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:26.194 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:26.194 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:26.194 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:26.194 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:26.194 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:26.194 Initialization complete. Launching workers. 
00:13:26.194 ======================================================== 00:13:26.194 Latency(us) 00:13:26.194 Device Information : IOPS MiB/s Average min max 00:13:26.194 PCIE (0000:00:10.0) NSID 1 from core 0: 6831.78 26.69 2340.08 1094.00 6286.30 00:13:26.194 PCIE (0000:00:11.0) NSID 1 from core 0: 6831.78 26.69 2341.48 1132.60 6422.76 00:13:26.194 PCIE (0000:00:13.0) NSID 1 from core 0: 6831.78 26.69 2341.40 1131.12 6413.82 00:13:26.194 PCIE (0000:00:12.0) NSID 1 from core 0: 6831.78 26.69 2341.34 1137.11 6154.51 00:13:26.194 PCIE (0000:00:12.0) NSID 2 from core 0: 6831.78 26.69 2341.28 1090.19 6137.57 00:13:26.194 PCIE (0000:00:12.0) NSID 3 from core 0: 6831.78 26.69 2341.22 917.32 6211.98 00:13:26.194 ======================================================== 00:13:26.194 Total : 40990.70 160.12 2341.13 917.32 6422.76 00:13:26.194 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69597 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69666 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69667 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:26.194 17:04:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:29.511 Initializing NVMe Controllers 00:13:29.511 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:29.511 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:29.511 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:29.511 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:29.511 Initialization complete. Launching workers. 
00:13:29.511 ======================================================== 00:13:29.511 Latency(us) 00:13:29.511 Device Information : IOPS MiB/s Average min max 00:13:29.511 PCIE (0000:00:10.0) NSID 1 from core 0: 5153.03 20.13 3102.88 1128.60 7280.63 00:13:29.511 PCIE (0000:00:11.0) NSID 1 from core 0: 5153.03 20.13 3104.25 1159.60 7581.84 00:13:29.511 PCIE (0000:00:13.0) NSID 1 from core 0: 5153.03 20.13 3104.14 1186.02 7784.00 00:13:29.511 PCIE (0000:00:12.0) NSID 1 from core 0: 5153.03 20.13 3104.01 1176.91 8168.92 00:13:29.511 PCIE (0000:00:12.0) NSID 2 from core 0: 5153.03 20.13 3104.00 1193.45 7941.89 00:13:29.511 PCIE (0000:00:12.0) NSID 3 from core 0: 5153.03 20.13 3103.85 1156.45 6925.38 00:13:29.511 ======================================================== 00:13:29.511 Total : 30918.16 120.77 3103.86 1128.60 8168.92 00:13:29.511 00:13:29.511 Initializing NVMe Controllers 00:13:29.511 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:29.511 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:29.511 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:29.511 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:29.511 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:29.511 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:29.511 Initialization complete. Launching workers. 00:13:29.511 ======================================================== 00:13:29.511 Latency(us) 00:13:29.511 Device Information : IOPS MiB/s Average min max 00:13:29.511 PCIE (0000:00:10.0) NSID 1 from core 1: 4916.18 19.20 3252.48 1063.68 7514.41 00:13:29.511 PCIE (0000:00:11.0) NSID 1 from core 1: 4916.18 19.20 3253.81 1091.97 6632.51 00:13:29.511 PCIE (0000:00:13.0) NSID 1 from core 1: 4916.18 19.20 3253.64 1047.84 7323.49 00:13:29.511 PCIE (0000:00:12.0) NSID 1 from core 1: 4916.18 19.20 3253.42 1032.50 7319.71 00:13:29.511 PCIE (0000:00:12.0) NSID 2 from core 1: 4916.18 19.20 3253.24 1002.60 7868.43 00:13:29.511 PCIE (0000:00:12.0) NSID 3 from core 1: 4916.18 19.20 3253.08 939.99 7494.29 00:13:29.511 ======================================================== 00:13:29.511 Total : 29497.11 115.22 3253.28 939.99 7868.43 00:13:29.511 00:13:31.467 Initializing NVMe Controllers 00:13:31.467 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:31.467 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:31.467 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:31.467 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:31.467 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:31.467 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:31.467 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:31.467 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:31.467 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:31.467 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:31.467 Initialization complete. Launching workers. 
00:13:31.467 ======================================================== 00:13:31.467 Latency(us) 00:13:31.467 Device Information : IOPS MiB/s Average min max 00:13:31.468 PCIE (0000:00:10.0) NSID 1 from core 2: 3219.17 12.57 4967.57 1093.37 17062.29 00:13:31.468 PCIE (0000:00:11.0) NSID 1 from core 2: 3222.37 12.59 4964.61 1083.55 17174.01 00:13:31.468 PCIE (0000:00:13.0) NSID 1 from core 2: 3222.37 12.59 4964.65 1119.00 17462.98 00:13:31.468 PCIE (0000:00:12.0) NSID 1 from core 2: 3222.37 12.59 4964.02 1124.03 14279.11 00:13:31.468 PCIE (0000:00:12.0) NSID 2 from core 2: 3222.37 12.59 4964.16 1102.16 15608.85 00:13:31.468 PCIE (0000:00:12.0) NSID 3 from core 2: 3222.37 12.59 4964.30 1001.90 18828.67 00:13:31.468 ======================================================== 00:13:31.468 Total : 19331.00 75.51 4964.88 1001.90 18828.67 00:13:31.468 00:13:31.468 ************************************ 00:13:31.468 END TEST nvme_multi_secondary 00:13:31.468 ************************************ 00:13:31.468 17:04:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69666 00:13:31.468 17:04:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69667 00:13:31.468 00:13:31.468 real 0m10.732s 00:13:31.468 user 0m18.595s 00:13:31.468 sys 0m1.076s 00:13:31.468 17:04:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.468 17:04:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:31.468 17:04:23 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:31.468 17:04:23 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:31.468 17:04:23 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68605 ]] 00:13:31.468 17:04:23 nvme -- common/autotest_common.sh@1090 -- # kill 68605 00:13:31.468 17:04:23 nvme -- common/autotest_common.sh@1091 -- # wait 68605 00:13:31.468 [2024-07-25 17:04:23.765079] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.765147] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.765168] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.765187] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.767331] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.767539] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.767695] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.767878] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.770159] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 
00:13:31.468 [2024-07-25 17:04:23.770359] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.770483] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.770512] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.772514] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.772558] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.772578] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.468 [2024-07-25 17:04:23.772596] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69539) is not found. Dropping the request. 00:13:31.726 17:04:24 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:13:31.726 17:04:24 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:13:31.726 17:04:24 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:31.726 17:04:24 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:31.726 17:04:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.726 17:04:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.726 ************************************ 00:13:31.726 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:31.726 ************************************ 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:31.726 * Looking for test storage... 
00:13:31.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:31.726 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:13:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
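The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by the waitforlisten helper, which blocks until the spdk_tgt target started just below (with core mask 0xF) begins answering RPCs on that socket. A simplified sketch of that wait loop, assuming rpc_get_methods as the readiness probe and the default socket path shown here (the real helper also caps the number of retries):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    until "$rpc" -s "$sock" rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_target_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done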
00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69823 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69823 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69823 ']' 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.984 17:04:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:31.984 [2024-07-25 17:04:24.348731] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:31.984 [2024-07-25 17:04:24.349457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69823 ] 00:13:32.241 [2024-07-25 17:04:24.547793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.500 [2024-07-25 17:04:24.856002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.500 [2024-07-25 17:04:24.856072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.500 [2024-07-25 17:04:24.856118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.500 [2024-07-25 17:04:24.856146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:33.434 nvme0n1 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_kXL5I.txt 00:13:33.434 17:04:25 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:33.434 true 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721927065 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69846 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:33.434 17:04:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:35.962 [2024-07-25 17:04:27.875432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:13:35.962 [2024-07-25 17:04:27.875838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:35.962 [2024-07-25 17:04:27.875872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:35.962 [2024-07-25 17:04:27.875895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.962 [2024-07-25 17:04:27.878257] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
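The successful reset above is the heart of this test: a Get Features admin command (opcode 10) was queued with error injection told to hold it for up to 15 s and complete it with sct=0/sc=1, and resetting the controller is what flushes it back with that injected status. Condensed to the RPC calls actually issued in this run (rpc.py path, controller name, payload and temp file name are the ones shown above; how the reply is captured into the temp file is simplified here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # the Get Features command below sits in the controller until the reset completes it
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj_kXL5I.txt &
    get_feat_pid=$!
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"        # reply (including the raw completion) lands in the temp file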
00:13:35.962 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69846 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69846 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69846 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_kXL5I.txt 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_kXL5I.txt 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69823 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69823 ']' 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69823 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.962 17:04:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69823 00:13:35.962 killing process with pid 69823 00:13:35.962 17:04:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.962 17:04:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.962 17:04:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69823' 00:13:35.962 17:04:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69823 00:13:35.962 17:04:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69823 00:13:37.862 17:04:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:37.862 17:04:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:37.862 00:13:37.862 real 0m6.221s 00:13:37.862 user 0m21.011s 00:13:37.862 sys 0m0.868s 00:13:37.862 17:04:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.862 17:04:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:37.862 ************************************ 00:13:37.863 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:37.863 ************************************ 00:13:37.863 17:04:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:37.863 17:04:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:37.863 17:04:30 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:37.863 17:04:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.863 17:04:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.863 ************************************ 00:13:37.863 START TEST nvme_fio 00:13:37.863 ************************************ 00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:13:37.863 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:37.863 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:37.863 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 
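The base64/hexdump pipeline above recovers the status fields from the completion (.cpl) that was saved for the injected command, and the test then compares them with the injected sct=0/sc=1 before tearing the target down. A compact equivalent of that decode (not the helper's exact bit arithmetic), assuming the same 16-byte completion captured in this run and the standard NVMe CQE layout with the status field in bytes 14-15:

    cpl_b64=AAAAAAAAAAAAAAAAAAACAA==                   # .cpl pulled from the temp file above
    mapfile -t b < <(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "%u\n"')
    status=$(( b[14] | (b[15] << 8) ))                  # phase bit + 15-bit status field
    sc=$((  (status >> 1) & 0xff ))                     # status code       -> 0x1
    sct=$(( (status >> 9) & 0x7 ))                      # status code type  -> 0x0
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"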
00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:37.863 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:38.137 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:38.137 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:38.137 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:38.137 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:38.137 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:38.137 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:38.137 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:38.394 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:38.394 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:38.651 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:38.651 17:04:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:38.651 17:04:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:38.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:38.909 fio-3.35 00:13:38.909 Starting 1 thread 00:13:42.228 00:13:42.228 test: (groupid=0, jobs=1): err= 0: pid=70004: Thu Jul 25 17:04:34 2024 00:13:42.228 read: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec) 00:13:42.228 slat (usec): min=4, max=106, avg= 6.34, stdev= 1.76 00:13:42.228 clat (usec): min=343, max=10176, avg=3881.22, stdev=368.60 00:13:42.228 lat (usec): min=349, max=10282, avg=3887.55, stdev=369.13 00:13:42.228 clat percentiles (usec): 00:13:42.228 | 1.00th=[ 3228], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3687], 00:13:42.228 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:13:42.228 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4686], 00:13:42.228 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6980], 99.95th=[ 8586], 00:13:42.228 | 99.99th=[ 9896] 00:13:42.228 bw ( KiB/s): min=62920, max=68264, per=100.00%, avg=65536.00, stdev=2673.76, samples=3 00:13:42.228 iops : min=15730, max=17066, avg=16384.00, stdev=668.44, samples=3 00:13:42.228 write: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec); 0 zone resets 00:13:42.228 slat (nsec): min=4935, max=43146, avg=6479.35, stdev=1726.32 00:13:42.228 clat (usec): min=268, max=9982, avg=3892.46, stdev=367.14 00:13:42.228 lat (usec): min=274, max=10007, avg=3898.94, stdev=367.59 00:13:42.228 clat percentiles (usec): 00:13:42.228 | 1.00th=[ 3261], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720], 00:13:42.228 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3884], 00:13:42.228 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4686], 00:13:42.228 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 7439], 99.95th=[ 8586], 00:13:42.228 | 99.99th=[ 9765] 00:13:42.228 bw ( KiB/s): min=63240, max=68216, per=99.59%, avg=65376.00, stdev=2561.61, samples=3 00:13:42.228 iops : min=15810, max=17054, avg=16344.00, stdev=640.40, samples=3 00:13:42.228 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:13:42.228 lat (msec) : 2=0.05%, 4=82.61%, 10=17.29%, 20=0.01% 00:13:42.228 cpu : usr=98.95%, sys=0.15%, ctx=7, majf=0, minf=606 00:13:42.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:42.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.228 issued rwts: total=32779,32839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.228 00:13:42.228 Run status group 0 (all jobs): 00:13:42.228 READ: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:13:42.228 WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (135MB), run=2001-2001msec 00:13:42.228 ----------------------------------------------------- 00:13:42.228 Suppressions used: 00:13:42.229 count bytes template 00:13:42.229 1 32 /usr/src/fio/parse.c 00:13:42.229 1 8 libtcmalloc_minimal.so 00:13:42.229 ----------------------------------------------------- 00:13:42.229 00:13:42.229 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:42.229 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:42.229 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:42.229 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:42.486 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:42.486 17:04:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:42.744 17:04:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:42.744 17:04:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:42.744 17:04:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:43.002 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:43.002 fio-3.35 00:13:43.002 Starting 1 thread 00:13:46.294 00:13:46.294 test: (groupid=0, jobs=1): err= 0: pid=70069: Thu Jul 25 17:04:38 2024 00:13:46.294 read: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(115MiB/2001msec) 00:13:46.294 slat (nsec): min=4687, max=59744, avg=7027.68, stdev=2370.38 00:13:46.294 clat (usec): min=309, max=8921, avg=4315.17, stdev=784.30 00:13:46.294 lat (usec): min=320, max=8981, avg=4322.20, stdev=785.41 00:13:46.294 clat percentiles (usec): 00:13:46.294 | 1.00th=[ 3261], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720], 00:13:46.294 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4047], 60.00th=[ 4424], 00:13:46.294 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5735], 00:13:46.294 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 8094], 99.95th=[ 
8160], 00:13:46.294 | 99.99th=[ 8848] 00:13:46.294 bw ( KiB/s): min=55744, max=63648, per=100.00%, avg=59354.67, stdev=3995.98, samples=3 00:13:46.294 iops : min=13936, max=15912, avg=14838.67, stdev=998.99, samples=3 00:13:46.294 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec); 0 zone resets 00:13:46.294 slat (nsec): min=4879, max=40807, avg=7186.90, stdev=2478.93 00:13:46.294 clat (usec): min=412, max=8740, avg=4325.64, stdev=787.20 00:13:46.294 lat (usec): min=422, max=8771, avg=4332.82, stdev=788.34 00:13:46.294 clat percentiles (usec): 00:13:46.294 | 1.00th=[ 3294], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3720], 00:13:46.294 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4490], 00:13:46.294 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5735], 00:13:46.294 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 8160], 99.95th=[ 8225], 00:13:46.294 | 99.99th=[ 8586] 00:13:46.294 bw ( KiB/s): min=56032, max=63064, per=100.00%, avg=59152.00, stdev=3582.28, samples=3 00:13:46.294 iops : min=14008, max=15766, avg=14788.00, stdev=895.57, samples=3 00:13:46.294 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:46.294 lat (msec) : 2=0.04%, 4=46.73%, 10=53.21% 00:13:46.294 cpu : usr=98.95%, sys=0.05%, ctx=9, majf=0, minf=607 00:13:46.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:46.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:46.294 issued rwts: total=29501,29538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:46.294 00:13:46.294 Run status group 0 (all jobs): 00:13:46.294 READ: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=115MiB (121MB), run=2001-2001msec 00:13:46.294 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:13:46.294 ----------------------------------------------------- 00:13:46.294 Suppressions used: 00:13:46.294 count bytes template 00:13:46.294 1 32 /usr/src/fio/parse.c 00:13:46.294 1 8 libtcmalloc_minimal.so 00:13:46.294 ----------------------------------------------------- 00:13:46.294 00:13:46.294 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:46.294 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:46.294 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:46.294 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:46.553 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:46.553 17:04:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:46.811 17:04:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:46.811 17:04:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:46.811 
17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:46.811 17:04:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:47.069 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:47.069 fio-3.35 00:13:47.069 Starting 1 thread 00:13:50.350 00:13:50.350 test: (groupid=0, jobs=1): err= 0: pid=70131: Thu Jul 25 17:04:42 2024 00:13:50.350 read: IOPS=14.0k, BW=54.6MiB/s (57.2MB/s)(109MiB/2001msec) 00:13:50.350 slat (usec): min=4, max=744, avg= 7.92, stdev= 4.99 00:13:50.350 clat (usec): min=255, max=10004, avg=4555.85, stdev=589.83 00:13:50.350 lat (usec): min=263, max=10011, avg=4563.77, stdev=590.70 00:13:50.350 clat percentiles (usec): 00:13:50.350 | 1.00th=[ 3556], 5.00th=[ 4113], 10.00th=[ 4178], 20.00th=[ 4228], 00:13:50.350 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4359], 60.00th=[ 4424], 00:13:50.351 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5407], 00:13:50.351 | 99.00th=[ 7046], 99.50th=[ 8094], 99.90th=[ 9503], 99.95th=[ 9765], 00:13:50.351 | 99.99th=[ 9896] 00:13:50.351 bw ( KiB/s): min=48192, max=58752, per=97.76%, avg=54629.33, stdev=5647.71, samples=3 00:13:50.351 iops : min=12048, max=14688, avg=13657.33, stdev=1411.93, samples=3 00:13:50.351 write: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(109MiB/2001msec); 0 zone resets 00:13:50.351 slat (nsec): min=5164, max=58986, avg=7987.24, stdev=2317.20 00:13:50.351 clat (usec): min=306, max=9950, avg=4563.94, stdev=596.71 00:13:50.351 lat (usec): min=314, max=9957, avg=4571.93, stdev=597.52 00:13:50.351 clat percentiles (usec): 00:13:50.351 | 1.00th=[ 3523], 5.00th=[ 4113], 10.00th=[ 4178], 20.00th=[ 4228], 00:13:50.351 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:13:50.351 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5407], 00:13:50.351 | 99.00th=[ 7177], 99.50th=[ 8160], 99.90th=[ 9765], 99.95th=[ 9765], 00:13:50.351 | 99.99th=[ 9896] 00:13:50.351 bw ( KiB/s): min=48784, max=58688, per=97.73%, avg=54656.00, stdev=5202.07, samples=3 00:13:50.351 iops : min=12196, 
max=14672, avg=13664.00, stdev=1300.52, samples=3 00:13:50.351 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:50.351 lat (msec) : 2=0.05%, 4=3.07%, 10=96.84%, 20=0.01% 00:13:50.351 cpu : usr=98.75%, sys=0.15%, ctx=4, majf=0, minf=606 00:13:50.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:50.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:50.351 issued rwts: total=27953,27977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:50.351 00:13:50.351 Run status group 0 (all jobs): 00:13:50.351 READ: bw=54.6MiB/s (57.2MB/s), 54.6MiB/s-54.6MiB/s (57.2MB/s-57.2MB/s), io=109MiB (114MB), run=2001-2001msec 00:13:50.351 WRITE: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=109MiB (115MB), run=2001-2001msec 00:13:50.351 ----------------------------------------------------- 00:13:50.351 Suppressions used: 00:13:50.351 count bytes template 00:13:50.351 1 32 /usr/src/fio/parse.c 00:13:50.351 1 8 libtcmalloc_minimal.so 00:13:50.351 ----------------------------------------------------- 00:13:50.351 00:13:50.351 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:50.351 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:50.351 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:50.351 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:50.609 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:50.609 17:04:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:50.867 17:04:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:50.867 17:04:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.867 
17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:50.867 17:04:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:51.125 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:51.125 fio-3.35 00:13:51.125 Starting 1 thread 00:13:56.440 00:13:56.440 test: (groupid=0, jobs=1): err= 0: pid=70195: Thu Jul 25 17:04:47 2024 00:13:56.440 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:13:56.440 slat (usec): min=4, max=107, avg= 7.00, stdev= 2.24 00:13:56.440 clat (usec): min=275, max=9512, avg=4028.54, stdev=541.62 00:13:56.440 lat (usec): min=282, max=9555, avg=4035.54, stdev=542.53 00:13:56.440 clat percentiles (usec): 00:13:56.440 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3556], 00:13:56.440 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080], 00:13:56.440 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4883], 95.00th=[ 5014], 00:13:56.440 | 99.00th=[ 5407], 99.50th=[ 6521], 99.90th=[ 7963], 99.95th=[ 8094], 00:13:56.440 | 99.99th=[ 9372] 00:13:56.440 bw ( KiB/s): min=59168, max=67920, per=99.98%, avg=63149.33, stdev=4429.07, samples=3 00:13:56.440 iops : min=14792, max=16980, avg=15787.33, stdev=1107.27, samples=3 00:13:56.440 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:13:56.440 slat (nsec): min=4964, max=44221, avg=7181.02, stdev=2045.53 00:13:56.440 clat (usec): min=249, max=9366, avg=4039.27, stdev=541.04 00:13:56.440 lat (usec): min=256, max=9385, avg=4046.45, stdev=541.91 00:13:56.440 clat percentiles (usec): 00:13:56.440 | 1.00th=[ 3228], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3556], 00:13:56.440 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080], 00:13:56.440 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4883], 95.00th=[ 5014], 00:13:56.440 | 99.00th=[ 5407], 99.50th=[ 6521], 99.90th=[ 7832], 99.95th=[ 8094], 00:13:56.440 | 99.99th=[ 9110] 00:13:56.440 bw ( KiB/s): min=58560, max=68096, per=99.39%, avg=62850.67, stdev=4839.15, samples=3 00:13:56.440 iops : min=14640, max=17024, avg=15712.67, stdev=1209.79, samples=3 00:13:56.440 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:56.440 lat (msec) : 2=0.05%, 4=47.73%, 10=52.18% 00:13:56.440 cpu : usr=98.90%, sys=0.15%, ctx=14, majf=0, minf=604 00:13:56.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:56.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:56.440 issued rwts: total=31596,31634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:56.440 00:13:56.440 Run status group 0 (all jobs): 00:13:56.440 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:13:56.440 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:13:56.440 ----------------------------------------------------- 00:13:56.440 Suppressions used: 
00:13:56.440 count bytes template 00:13:56.440 1 32 /usr/src/fio/parse.c 00:13:56.440 1 8 libtcmalloc_minimal.so 00:13:56.440 ----------------------------------------------------- 00:13:56.440 00:13:56.440 ************************************ 00:13:56.440 END TEST nvme_fio 00:13:56.440 ************************************ 00:13:56.440 17:04:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:56.440 17:04:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:56.440 00:13:56.440 real 0m17.877s 00:13:56.440 user 0m13.158s 00:13:56.440 sys 0m3.592s 00:13:56.440 17:04:48 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.440 17:04:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:56.440 ************************************ 00:13:56.440 END TEST nvme 00:13:56.440 ************************************ 00:13:56.440 00:13:56.440 real 1m31.994s 00:13:56.440 user 3m42.795s 00:13:56.440 sys 0m17.067s 00:13:56.440 17:04:48 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.440 17:04:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.440 17:04:48 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:13:56.440 17:04:48 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:56.440 17:04:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.440 17:04:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.440 17:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:56.440 ************************************ 00:13:56.440 START TEST nvme_scc 00:13:56.440 ************************************ 00:13:56.440 17:04:48 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:56.440 * Looking for test storage... 
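The fio_plugin trace earlier in this run shows how the harness launches fio with the SPDK ioengine: it runs ldd on the spdk_nvme plugin, greps for a sanitizer runtime (libasan or libclang_rt.asan), and, when one is found, preloads that runtime together with the plugin before invoking /usr/src/fio/fio. A minimal sketch of that pattern follows; the function name fio_plugin_sketch is illustrative, and it assumes fio is installed at /usr/src/fio/fio as in the log — the real fio_nvme/fio_plugin helpers in autotest_common.sh add more bookkeeping around the same steps.

fio_plugin_sketch() {
    local plugin=$1; shift
    local sanitizers=('libasan' 'libclang_rt.asan')
    local asan_lib= sanitizer

    for sanitizer in "${sanitizers[@]}"; do
        # If the plugin was built with ASan, ldd lists the runtime;
        # the third column holds the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # Preload the sanitizer runtime (if any) together with the plugin,
    # then pass the remaining arguments straight through to fio.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}

Called as, for example, fio_plugin_sketch build/fio/spdk_nvme example_config.fio --bs=4096, this reproduces the LD_PRELOAD line visible in the trace above.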
00:13:56.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:56.440 17:04:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.440 17:04:48 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.440 17:04:48 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.440 17:04:48 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.440 17:04:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.440 17:04:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.440 17:04:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.440 17:04:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:56.440 17:04:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:56.440 17:04:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:56.441 17:04:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:56.441 17:04:48 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:56.441 17:04:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:56.441 17:04:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:56.441 17:04:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:56.441 17:04:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:56.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.441 Waiting for block devices as requested 00:13:56.698 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.698 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.698 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:56.956 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.229 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:02.229 17:04:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:02.229 17:04:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:02.229 17:04:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:02.229 17:04:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:02.229 17:04:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:02.229 
17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.229 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:02.230 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:02.230 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:02.231 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.231 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.232 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.233 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
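The lbaf0..lbaf7 entries captured just below store nvme-cli's raw `ms:<metadata bytes> lbads:<log2 data size> rp:<relative perf>` strings, and flbas (0x4 for nvme0n1 above) selects which format is in use. A minimal sketch of decoding the in-use block size from an array filled by the loop shown earlier (the helper name is illustrative, not part of functions.sh):

    # Illustrative helper (assumption, not from nvme/functions.sh): report the
    # data size in bytes of the namespace's in-use LBA format.
    lbads_bytes_sketch() {
        local -n ns=$1                          # nameref to e.g. nvme0n1
        local fmt=$(( ${ns[flbas]} & 0xf ))     # low nibble of flbas = current format index
        local lbads
        lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ns[lbaf$fmt]}")
        echo $(( 1 << lbads ))                  # flbas=0x4, lbaf4 "lbads:12" -> 4096
    }

For this namespace that yields 4096-byte blocks with no metadata; nvme1n1 further down reports flbas=0x7, i.e. 4096-byte blocks with 64 bytes of metadata per block.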
00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:02.234 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:02.235 17:04:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:02.235 17:04:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:02.235 17:04:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:02.235 17:04:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:02.235 17:04:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.235 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:02.236 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.236 
17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:02.236 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:02.237 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.237 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:02.238 17:04:54 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.238 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:02.239 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:02.239 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 
17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
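Once a controller's id-ctrl and per-namespace id-ns fields are captured, as the entries above and below do for nvme1 and nvme1n1, the enumeration loop in functions.sh records the device in a set of global maps; the trace already showed this for nvme0 at 0000:00:11.0 and repeats it shortly for nvme1 at 0000:00:10.0 before moving on to nvme2 at 0000:00:12.0. A simplified sketch of that bookkeeping, using the map names visible in the trace (the sysfs BDF lookup is an assumption; the pci_can_use filter and error handling are omitted):

    # Sketch of the per-controller bookkeeping seen at functions.sh lines 47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, nvme2, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0 (assumed lookup)
        # ... nvme_get "$ctrl_dev" id-ctrl, then id-ns for each ${ctrl_dev}n* namespace ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of that controller's namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # indexed by controller number
    done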
00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:02.240 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:02.241 17:04:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:02.241 17:04:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:02.241 17:04:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:02.241 17:04:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.241 17:04:54 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.241 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.242 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:02.243 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:14:02.244 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:02.244 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 
17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.245 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:02.246 17:04:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.246 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.247 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.248 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:02.249 17:04:54 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.249 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:02.250 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
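Note on the trace above: every functions.sh@16-23 block follows the same pattern — nvme_get runs nvme-cli against a device, splits each line of output on ':' into a register name and a value, and evals the pair into a global associative array named after the device (nvme2n2, nvme2n3, ...). A minimal sketch of that loop, reconstructed from the trace rather than taken verbatim from nvme/functions.sh (the whitespace trimming shown is an assumption):

    # nvme_get <array-name> <nvme-cli subcommand...>
    # e.g. nvme_get nvme2n2 id-ns /dev/nvme2n2; echo "${nvme2n2[nsze]}"
    nvme_get() {
            local ref=$1 reg val
            shift
            local -gA "$ref=()"                          # one global associative array per device
            while IFS=: read -r reg val; do
                    [[ -n $val ]] || continue            # skip lines without a "reg : val" pair
                    reg=${reg//[[:space:]]/}             # "lbaf  0 " -> "lbaf0" (assumed trimming)
                    val=${val#"${val%%[![:space:]]*}"}   # drop leading spaces from the value
                    eval "${ref}[\$reg]=\$val"           # e.g. nvme2n2[nsze]=0x100000
            done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as seen at functions.sh@16
    }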
00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:02.251 17:04:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:14:02.251 17:04:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:14:02.251 17:04:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:02.251 17:04:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:02.251 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.252 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.252 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:02.252 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:02.252 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:02.252 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:02.512 17:04:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:02.512 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:02.513 17:04:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.513 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:02.514 17:04:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:02.514 17:04:54 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:02.514 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:02.515 
17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:02.515 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
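The nvme3 dump above comes from the nvme_get loop in test/common/nvme/functions.sh: each "reg : val" line emitted by `nvme id-ctrl` is split on ':' and cached in a per-controller associative array. A minimal sketch of that pattern, assuming a locally attached /dev/nvme0 and nvme-cli on PATH (the array and device names here are illustrative, not the exact nvme_get implementation):

#!/usr/bin/env bash
# Sketch of the parsing pattern traced above: split each "reg : val"
# line of `nvme id-ctrl` output and cache it per controller.
declare -A ctrl_regs=()
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue            # skip blank or unparsable lines
    reg=${reg//[[:space:]]/}                        # drop padding around the key
    val="${val#"${val%%[![:space:]]*}"}"            # trim leading spaces from the value
    ctrl_regs["$reg"]=$val                          # e.g. ctrl_regs[oncs]=0x15d
done < <(nvme id-ctrl /dev/nvme0 2>/dev/null)
printf 'oncs=%s vwc=%s subnqn=%s\n' \
    "${ctrl_regs[oncs]:-?}" "${ctrl_regs[vwc]:-?}" "${ctrl_regs[subnqn]:-?}"

With the registers cached this way, the later get_nvme_ctrl_feature calls in the traces answer feature lookups (oncs, for example) straight from the array instead of re-invoking nvme-cli.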
00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:14:02.516 17:04:54 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:14:02.516 17:04:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:14:02.516 17:04:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:14:02.516 17:04:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:03.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:03.644 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:03.644 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:03.644 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:03.644 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:03.644 17:04:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:03.644 17:04:56 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:03.644 17:04:56 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.644 17:04:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:03.644 ************************************ 00:14:03.644 START TEST nvme_simple_copy 00:14:03.644 ************************************ 00:14:03.644 17:04:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:03.900 Initializing NVMe Controllers 00:14:03.900 Attaching to 0000:00:10.0 00:14:03.900 Controller supports SCC. Attached to 0000:00:10.0 00:14:03.901 Namespace ID: 1 size: 6GB 00:14:03.901 Initialization complete. 00:14:03.901 00:14:03.901 Controller QEMU NVMe Ctrl (12340 ) 00:14:03.901 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:14:03.901 Namespace Block Size:4096 00:14:03.901 Writing LBAs 0 to 63 with Random Data 00:14:03.901 Copied LBAs from 0 - 63 to the Destination LBA 256 00:14:03.901 LBAs matching Written Data: 64 00:14:03.901 00:14:03.901 real 0m0.316s 00:14:03.901 user 0m0.134s 00:14:03.901 sys 0m0.081s 00:14:03.901 17:04:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.901 ************************************ 00:14:03.901 END TEST nvme_simple_copy 00:14:03.901 ************************************ 00:14:03.901 17:04:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:14:04.158 00:14:04.158 real 0m8.122s 00:14:04.158 user 0m1.259s 00:14:04.158 sys 0m1.660s 00:14:04.158 17:04:56 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.158 ************************************ 00:14:04.158 END TEST nvme_scc 00:14:04.158 ************************************ 00:14:04.158 17:04:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:04.158 17:04:56 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:14:04.158 17:04:56 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:14:04.158 17:04:56 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:14:04.158 17:04:56 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:14:04.158 17:04:56 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:14:04.158 17:04:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:04.158 17:04:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.158 17:04:56 -- common/autotest_common.sh@10 -- # set +x 00:14:04.158 ************************************ 00:14:04.158 START TEST nvme_fdp 00:14:04.158 ************************************ 00:14:04.158 17:04:56 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:14:04.158 * Looking for test storage... 
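Before the simple_copy run above, get_ctrls_with_feature picked nvme1 by applying a ctrl_has_scc-style check to every scanned controller: it reads the parsed oncs value and tests ONCS bit 8, which advertises the Simple Copy command. A short sketch of that check, with the helper name and the hard-coded oncs value below used purely for illustration (this run reported oncs=0x15d on all four controllers):

#!/usr/bin/env bash
# Sketch of the SCC capability check traced before nvme_simple_copy.
has_scc() {
    local oncs=$1
    (( oncs & 1 << 8 ))         # bit 8 of ONCS advertises the Copy (SCC) command
}
for ctrl in nvme0 nvme1 nvme2 nvme3; do
    oncs=0x15d                  # value parsed from id-ctrl in the traces above
    has_scc "$oncs" && echo "$ctrl supports SCC"
done

Since 0x15d has 0x100 set, every controller in this run passes the check; the test simply takes the first match, nvme1 at 0000:00:10.0, and hands it to the simple_copy binary.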
00:14:04.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:04.158 17:04:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.158 17:04:56 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.158 17:04:56 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.158 17:04:56 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.158 17:04:56 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.158 17:04:56 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.158 17:04:56 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.158 17:04:56 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:14:04.158 17:04:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:04.158 17:04:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:14:04.158 17:04:56 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.158 17:04:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:04.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:04.722 Waiting for block devices as requested 00:14:04.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:04.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:04.979 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:04.979 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:10.242 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:10.242 17:05:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:10.242 17:05:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:10.242 17:05:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:10.242 17:05:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:10.242 17:05:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 
17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:10.242 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:10.243 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:10.243 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.243 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:10.244 17:05:02 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:10.244 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:10.245 
17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:10.245 
17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.245 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:10.246 17:05:02 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
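
The id-ns fields captured above for nvme0n1 (nsze=0x140000, flbas=0x4, and lbaf4 with lbads:12 marked "in use") are enough to work out the namespace geometry by hand. A minimal sketch with illustrative variable names, separate from functions.sh, that decodes them:

  # Values copied from the nvme0n1 id-ns trace above; names are illustrative.
  nsze=0x140000            # namespace size in logical blocks
  flbas=0x4                # low bits select the in-use LBA format index
  lbaf4_lbads=12           # lbads of lbaf4, the format flbas points at ("in use")

  fmt=$(( flbas & 0xf ))                 # -> 4
  block=$(( 1 << lbaf4_lbads ))          # 2^12 = 4096-byte logical blocks
  bytes=$(( nsze * block ))              # 1310720 * 4096 = 5368709120 (5 GiB)
  printf 'format %d, %d-byte blocks, %d bytes total\n' "$fmt" "$block" "$bytes"
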
00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:10.246 17:05:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:10.246 17:05:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:10.246 17:05:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:10.247 17:05:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:10.247 17:05:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:10.247 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 
17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
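
Every field in the trace above goes through the same nvme_get pattern: the output of /usr/local/src/nvme-cli/nvme id-ctrl is read line by line with IFS=':', each line is split into a register name and a value, and the pair is stored into a bash associative array. A stripped-down approximation of that loop (trimming and naming simplified, not the exact functions.sh code):

  #!/usr/bin/env bash
  # Rough sketch of the id-ctrl parsing loop the trace keeps repeating.
  declare -A ctrl

  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # keep only "name : value" lines
      reg=$(tr -d '[:space:]' <<< "$reg")  # "mdts      " -> "mdts"
      ctrl[$reg]=${val# }                  # drop the single space after the colon
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)

  echo "mn=${ctrl[mn]} mdts=${ctrl[mdts]} ver=${ctrl[ver]}"
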
00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:10.247 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:10.248 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
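
The wctemp/cctemp values just recorded for nvme1 (343 and 373) are Kelvin, as the identify data defines them; a quick conversion makes the thresholds readable:

  # WCTEMP/CCTEMP are Kelvin in identify-controller data; subtract 273 for Celsius.
  wctemp=343; cctemp=373                   # values recorded for nvme1 above
  echo "warning: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"   # 70 C, 100 C
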
00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:10.248 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:10.248 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
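
The sqes=0x66 and cqes=0x44 values recorded above pack two log2 sizes each (bits 3:0 minimum, bits 7:4 maximum); a small sketch decoding them, with values copied from the trace:

  sqes=0x66; cqes=0x44
  printf 'SQ entry %d..%d bytes, CQ entry %d..%d bytes\n' \
      $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) )) \
      $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
  # -> SQ entry 64..64 bytes, CQ entry 16..16 bytes
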
00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.249 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
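
The namespace walk entered above (functions.sh@54-57) globs every nvmeXnY node under the controller's sysfs entry and runs id-ns on each one. A reduced sketch of that enumeration, assuming the sysfs layout the trace shows:

  # Reduced version of the per-controller namespace walk visible in the trace.
  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/${ctrl##*/}n"*; do
      [[ -e $ns ]] || continue          # glob stays literal when no namespaces exist
      ns_dev=${ns##*/}                  # e.g. nvme1n1
      /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev" | grep -E 'nsze|flbas'
  done
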
00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.250 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:10.513 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 
17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:10.514 17:05:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:10.514 17:05:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:10.514 17:05:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:10.514 17:05:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:10.514 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:10.515 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:10.515 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:10.516 17:05:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:10.516 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:10.517 17:05:02 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 
17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:10.517 17:05:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:10.518 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.518 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:10.519 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:10.520 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
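The trace above shows the nvme_get helper in nvme/functions.sh walking the output of /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 one line at a time: each line is split on ':' into reg and val (functions.sh@21), empty values are skipped (@22), and the pair is eval'd into a global associative array named after the device (@20, @23). A minimal, self-contained sketch of that parsing pattern follows; it is an illustrative stand-in, not the SPDK source, and the sample field values are copied from this log so the snippet runs without a real NVMe device:

  #!/usr/bin/env bash
  # Sketch of the reg:val parsing pattern visible in the trace. The helper name
  # parse_id_output is made up for illustration; functions.sh calls it nvme_get.
  parse_id_output() {
      local ref=$1 reg val
      declare -gA "$ref=()"              # global associative array, as in the trace (@20)
      while IFS=: read -r reg val; do    # split "field : value" lines on the first ':' (@21)
          reg=${reg//[[:space:]]/}       # drop the padding around the field name
          val=${val# }                   # drop the single space after ':'
          [[ -n $val ]] || continue      # skip lines with no value, as the trace does (@22)
          eval "${ref}[\$reg]=\"\$val\"" # store it under the field name (@23)
      done
  }

  # Feed id-ns-style text on stdin instead of querying a device; values from this log.
  parse_id_output nvme2n2 <<'EOF'
  nsze   : 0x100000
  ncap   : 0x100000
  flbas  : 0x4
  lbaf4  : ms:0 lbads:12 rp:0 (in use)
  EOF

  declare -p nvme2n2

Note that values containing further colons (the lbafN descriptors) survive intact, because read assigns everything after the first ':' to val; that matches the lbaf0..lbaf7 entries recorded in the trace.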
00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.520 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.521 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:10.521 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:10.522 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.522 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:10.523 17:05:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:14:10.523 17:05:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:14:10.523 17:05:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:10.523 17:05:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:10.523 17:05:02 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.523 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.524 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
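Fields captured here for nvme3 (for example ctratt=0x88010 and oacs=0x12a) end up in the nvme3 associative array built by nvme_get, where they can be tested with ordinary shell arithmetic. A tiny usage sketch, with the caveat that reading bit 19 of CTRATT as the Flexible Data Placement attribute follows the NVMe TP4146 definition and is stated here as an assumption, not something this log itself asserts:

  # ctratt/oacs values copied from this log; the 0x80000 mask is an assumed
  # FDP attribute bit, included only to illustrate testing a parsed field.
  declare -A nvme3=([ctratt]=0x88010 [oacs]=0x12a)

  if (( ${nvme3[ctratt]} & 0x80000 )); then
      echo "controller reports FDP support (ctratt=${nvme3[ctratt]})"
  fi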
00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
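Stepping back, the surrounding loop (functions.sh@47..@63 in the trace) enumerates /sys/class/nvme/nvme*, resolves each controller's PCI address, parses id-ctrl and every namespace's id-ns through nvme_get, and records the results in the ctrls, nvmes, bdfs and ordered_ctrls arrays, as seen when nvme2 was registered against 0000:00:12.0 just before nvme3 was picked up at 0000:00:13.0. A condensed sketch of only the enumeration and bookkeeping follows; deriving the BDF via readlink on the device symlink is an assumption, and the real script stores the names of the per-device arrays built by nvme_get rather than the plain strings used here:

  #!/usr/bin/env bash
  # Illustrative enumeration/bookkeeping sketch, not the SPDK functions.sh source.
  declare -A ctrls=() bdfs=() nvmes=()
  declare -a ordered_ctrls=()

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                        # glob may match nothing
      ctrl_dev=${ctrl##*/}                              # e.g. nvme2
      bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed derivation)

      ns_list=()
      for ns in "$ctrl/${ctrl##*/}n"*; do               # same glob the trace iterates (@54)
          [[ -e $ns ]] && ns_list+=("${ns##*/}")        # e.g. nvme2n1 nvme2n2 nvme2n3
      done

      ctrls["$ctrl_dev"]=$ctrl_dev
      bdfs["$ctrl_dev"]=$bdf
      nvmes["$ctrl_dev"]="${ns_list[*]}"
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number (@63)
  done

  declare -p ctrls bdfs nvmes ordered_ctrls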
00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:10.525 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 
17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:10.526 17:05:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:10.527 17:05:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:14:10.527 17:05:02 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:14:10.786 17:05:02 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:14:10.786 17:05:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:10.786 17:05:02 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:10.787 17:05:02 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:11.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.629 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.629 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.629 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.888 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.888 17:05:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:11.888 17:05:04 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:11.888 17:05:04 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.888 17:05:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:11.888 ************************************ 00:14:11.888 START TEST nvme_flexible_data_placement 00:14:11.888 ************************************ 00:14:11.888 17:05:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:12.146 Initializing NVMe Controllers 00:14:12.146 Attaching to 0000:00:13.0 00:14:12.146 Controller supports FDP Attached to 0000:00:13.0 00:14:12.146 Namespace ID: 1 Endurance Group ID: 1 
00:14:12.146 Initialization complete. 00:14:12.146 00:14:12.146 ================================== 00:14:12.146 == FDP tests for Namespace: #01 == 00:14:12.146 ================================== 00:14:12.146 00:14:12.146 Get Feature: FDP: 00:14:12.146 ================= 00:14:12.146 Enabled: Yes 00:14:12.146 FDP configuration Index: 0 00:14:12.146 00:14:12.146 FDP configurations log page 00:14:12.146 =========================== 00:14:12.146 Number of FDP configurations: 1 00:14:12.146 Version: 0 00:14:12.146 Size: 112 00:14:12.146 FDP Configuration Descriptor: 0 00:14:12.146 Descriptor Size: 96 00:14:12.146 Reclaim Group Identifier format: 2 00:14:12.146 FDP Volatile Write Cache: Not Present 00:14:12.146 FDP Configuration: Valid 00:14:12.146 Vendor Specific Size: 0 00:14:12.146 Number of Reclaim Groups: 2 00:14:12.146 Number of Reclaim Unit Handles: 8 00:14:12.146 Max Placement Identifiers: 128 00:14:12.146 Number of Namespaces Supported: 256 00:14:12.146 Reclaim unit Nominal Size: 6000000 bytes 00:14:12.146 Estimated Reclaim Unit Time Limit: Not Reported 00:14:12.146 RUH Desc #000: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #001: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #002: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #003: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #004: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #005: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #006: RUH Type: Initially Isolated 00:14:12.146 RUH Desc #007: RUH Type: Initially Isolated 00:14:12.146 00:14:12.146 FDP reclaim unit handle usage log page 00:14:12.146 ====================================== 00:14:12.146 Number of Reclaim Unit Handles: 8 00:14:12.146 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:12.146 RUH Usage Desc #001: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #002: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #003: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #004: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #005: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #006: RUH Attributes: Unused 00:14:12.146 RUH Usage Desc #007: RUH Attributes: Unused 00:14:12.146 00:14:12.146 FDP statistics log page 00:14:12.146 ======================= 00:14:12.146 Host bytes with metadata written: 801755136 00:14:12.146 Media bytes with metadata written: 801918976 00:14:12.146 Media bytes erased: 0 00:14:12.146 00:14:12.146 FDP Reclaim unit handle status 00:14:12.146 ============================== 00:14:12.146 Number of RUHS descriptors: 2 00:14:12.146 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000363 00:14:12.146 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:12.146 00:14:12.146 FDP write on placement id: 0 success 00:14:12.146 00:14:12.146 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:12.146 00:14:12.146 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:12.146 00:14:12.146 Get Feature: FDP Events for Placement handle: #0 00:14:12.146 ======================== 00:14:12.147 Number of FDP Events: 6 00:14:12.147 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:12.147 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:12.147 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:14:12.147 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:12.147 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:12.147 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
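The controller selection that singled out nvme3 above comes down to one bit test: get_ctrls_with_feature reads each controller's Identify Controller CTRATT value and keeps those with bit 19 (Flexible Data Placement) set. A minimal standalone sketch of that check, with the CTRATT values hard-coded here for illustration rather than parsed from the register dump:

# Sketch only (assumed inputs, not part of nvme/functions.sh): report whether a
# controller advertises FDP by testing CTRATT bit 19, as ctrl_has_fdp does above.
has_fdp() {
    local ctratt=$1
    if (( ctratt & 1 << 19 )); then
        echo "FDP supported"
    else
        echo "FDP not supported"
    fi
}
has_fdp 0x88010   # nvme3 in this run -> FDP supported
has_fdp 0x8000    # nvme0/nvme1/nvme2 in this run -> FDP not supported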
00:14:12.147 00:14:12.147 FDP events log page 00:14:12.147 =================== 00:14:12.147 Number of FDP events: 1 00:14:12.147 FDP Event #0: 00:14:12.147 Event Type: RU Not Written to Capacity 00:14:12.147 Placement Identifier: Valid 00:14:12.147 NSID: Valid 00:14:12.147 Location: Valid 00:14:12.147 Placement Identifier: 0 00:14:12.147 Event Timestamp: 8 00:14:12.147 Namespace Identifier: 1 00:14:12.147 Reclaim Group Identifier: 0 00:14:12.147 Reclaim Unit Handle Identifier: 0 00:14:12.147 00:14:12.147 FDP test passed 00:14:12.147 00:14:12.147 real 0m0.288s 00:14:12.147 user 0m0.099s 00:14:12.147 sys 0m0.087s 00:14:12.147 ************************************ 00:14:12.147 END TEST nvme_flexible_data_placement 00:14:12.147 ************************************ 00:14:12.147 17:05:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.147 17:05:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 ************************************ 00:14:12.147 END TEST nvme_fdp 00:14:12.147 ************************************ 00:14:12.147 00:14:12.147 real 0m8.116s 00:14:12.147 user 0m1.301s 00:14:12.147 sys 0m1.705s 00:14:12.147 17:05:04 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.147 17:05:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 17:05:04 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:14:12.147 17:05:04 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:12.147 17:05:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:12.147 17:05:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.147 17:05:04 -- common/autotest_common.sh@10 -- # set +x 00:14:12.406 ************************************ 00:14:12.406 START TEST nvme_rpc 00:14:12.406 ************************************ 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:12.406 * Looking for test storage... 
00:14:12.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:14:12.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71532 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:12.406 17:05:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71532 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71532 ']' 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.406 17:05:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.665 [2024-07-25 17:05:04.900356] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:12.665 [2024-07-25 17:05:04.900779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71532 ] 00:14:12.665 [2024-07-25 17:05:05.079790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.923 [2024-07-25 17:05:05.368210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.923 [2024-07-25 17:05:05.368210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.884 17:05:06 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.884 17:05:06 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:13.884 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:14.142 Nvme0n1 00:14:14.142 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:14.142 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:14.400 request: 00:14:14.400 { 00:14:14.400 "bdev_name": "Nvme0n1", 00:14:14.400 "filename": "non_existing_file", 00:14:14.400 "method": "bdev_nvme_apply_firmware", 00:14:14.400 "req_id": 1 00:14:14.400 } 00:14:14.400 Got JSON-RPC error response 00:14:14.400 response: 00:14:14.400 { 00:14:14.400 "code": -32603, 00:14:14.400 "message": "open file failed." 00:14:14.400 } 00:14:14.400 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:14.400 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:14.400 17:05:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:14.659 17:05:07 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:14.659 17:05:07 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71532 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71532 ']' 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71532 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71532 00:14:14.659 killing process with pid 71532 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71532' 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71532 00:14:14.659 17:05:07 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71532 00:14:17.187 ************************************ 00:14:17.187 END TEST nvme_rpc 00:14:17.187 ************************************ 00:14:17.187 00:14:17.187 real 0m4.564s 00:14:17.187 user 0m8.425s 00:14:17.187 sys 0m0.745s 00:14:17.187 17:05:09 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.187 17:05:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.187 17:05:09 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:17.187 17:05:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:14:17.187 17:05:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.187 17:05:09 -- common/autotest_common.sh@10 -- # set +x 00:14:17.187 ************************************ 00:14:17.187 START TEST nvme_rpc_timeouts 00:14:17.187 ************************************ 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:17.187 * Looking for test storage... 00:14:17.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71608 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71608 00:14:17.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71632 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:17.187 17:05:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71632 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71632 ']' 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.187 17:05:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:17.187 [2024-07-25 17:05:09.418050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:17.187 [2024-07-25 17:05:09.418411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71632 ] 00:14:17.187 [2024-07-25 17:05:09.581851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.534 [2024-07-25 17:05:09.816368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.534 [2024-07-25 17:05:09.816377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.492 Checking default timeout settings: 00:14:18.492 17:05:10 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.492 17:05:10 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:14:18.492 17:05:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:18.492 17:05:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:18.751 Making settings changes with rpc: 00:14:18.751 17:05:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:18.751 17:05:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:18.751 Check default vs. modified settings: 00:14:18.751 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:14:18.751 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 Setting action_on_timeout is changed as expected. 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 Setting timeout_us is changed as expected. 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:19.318 Setting timeout_admin_us is changed as expected. 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
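Each of the three "changed as expected" messages above comes from the same extraction: the test saves the default and modified configurations with rpc.py save_config, pulls one field out of each dump with grep/awk/sed, and requires the two values to differ. A rough standalone equivalent using the settings files from this run (the helper name is illustrative, not taken from nvme_rpc_timeouts.sh):

# Illustration only: compare one bdev_nvme option between the saved default and
# modified config dumps, the way the settings loop above does for each setting.
check_setting() {
    local setting=$1 before after
    before=$(grep "$setting" /tmp/settings_default_71608 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_71608 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
}
check_setting timeout_us   # default 0 vs. modified 12000000 in this run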
00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71608 /tmp/settings_modified_71608 00:14:19.318 17:05:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71632 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71632 ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71632 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71632 00:14:19.318 killing process with pid 71632 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71632' 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71632 00:14:19.318 17:05:11 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71632 00:14:21.848 RPC TIMEOUT SETTING TEST PASSED. 00:14:21.848 17:05:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:14:21.848 00:14:21.848 real 0m4.616s 00:14:21.848 user 0m8.677s 00:14:21.848 sys 0m0.701s 00:14:21.848 ************************************ 00:14:21.848 END TEST nvme_rpc_timeouts 00:14:21.848 ************************************ 00:14:21.848 17:05:13 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.848 17:05:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:21.848 17:05:13 -- spdk/autotest.sh@247 -- # uname -s 00:14:21.848 17:05:13 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:14:21.848 17:05:13 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:21.848 17:05:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:21.848 17:05:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.848 17:05:13 -- common/autotest_common.sh@10 -- # set +x 00:14:21.848 ************************************ 00:14:21.848 START TEST sw_hotplug 00:14:21.848 ************************************ 00:14:21.848 17:05:13 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:21.848 * Looking for test storage... 
00:14:21.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:21.848 17:05:13 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:22.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.107 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.107 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.107 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.107 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.107 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:22.107 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:22.107 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:14:22.107 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@230 -- # local class 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:14:22.107 17:05:14 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:14:22.108 17:05:14 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@15 -- # local i 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:14:22.108 17:05:14 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:22.108 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:22.108 17:05:14 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:22.108 17:05:14 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:22.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.675 Waiting for block devices as requested 00:14:22.934 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.934 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.934 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.934 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:28.202 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:28.202 17:05:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:28.202 17:05:20 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:28.460 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:28.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:28.718 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:28.976 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:29.234 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:29.234 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:29.234 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:29.234 17:05:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72499 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:29.493 17:05:21 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:14:29.493 17:05:21 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:14:29.493 17:05:21 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:14:29.493 17:05:21 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:14:29.493 17:05:21 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:29.493 17:05:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:29.751 Initializing NVMe Controllers 00:14:29.751 Attaching to 0000:00:10.0 00:14:29.751 Attaching to 0000:00:11.0 00:14:29.751 Attached to 0000:00:10.0 00:14:29.751 Attached to 0000:00:11.0 00:14:29.751 Initialization complete. Starting I/O... 
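For context on how the two hotplug targets were chosen: nvme_in_userspace walks lspci -mm -n -D for devices with class 01, subclass 08, prog-if 02 (NVMe), and sw_hotplug then keeps only the first nvme_count entries. A condensed sketch of that selection (parsing simplified relative to scripts/common.sh):

# Condensed illustration: list NVMe endpoints by PCI class code and keep the
# first two BDFs for the hotplug test, matching nvme_count=2 in the run above.
mapfile -t nvmes < <(lspci -mm -n -D | tr -d '"' | awk '$2 == "0108" && /-p02/ {print $1}')
nvmes=("${nvmes[@]:0:2}")
printf '%s\n' "${nvmes[@]}"   # -> 0000:00:10.0 and 0000:00:11.0 on this VM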
00:14:29.751 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:29.751 QEMU NVMe Ctrl (12341 ): 1 I/Os completed (+1) 00:14:29.751 00:14:30.686 QEMU NVMe Ctrl (12340 ): 1216 I/Os completed (+1216) 00:14:30.686 QEMU NVMe Ctrl (12341 ): 1275 I/Os completed (+1274) 00:14:30.686 00:14:31.620 QEMU NVMe Ctrl (12340 ): 2732 I/Os completed (+1516) 00:14:31.620 QEMU NVMe Ctrl (12341 ): 2818 I/Os completed (+1543) 00:14:31.620 00:14:32.580 QEMU NVMe Ctrl (12340 ): 4364 I/Os completed (+1632) 00:14:32.581 QEMU NVMe Ctrl (12341 ): 4515 I/Os completed (+1697) 00:14:32.581 00:14:33.955 QEMU NVMe Ctrl (12340 ): 6152 I/Os completed (+1788) 00:14:33.955 QEMU NVMe Ctrl (12341 ): 6327 I/Os completed (+1812) 00:14:33.955 00:14:34.522 QEMU NVMe Ctrl (12340 ): 7860 I/Os completed (+1708) 00:14:34.522 QEMU NVMe Ctrl (12341 ): 8066 I/Os completed (+1739) 00:14:34.522 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.464 [2024-07-25 17:05:27.730574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:35.464 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:35.464 [2024-07-25 17:05:27.733362] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.733608] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.733812] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.733897] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:35.464 [2024-07-25 17:05:27.737876] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.738134] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.738319] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.738363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.464 [2024-07-25 17:05:27.762426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:35.464 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:35.464 [2024-07-25 17:05:27.764923] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.765161] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.765348] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.765393] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:35.464 [2024-07-25 17:05:27.769079] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.769293] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.769381] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 [2024-07-25 17:05:27.769453] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.464 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:35.730 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:35.730 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.730 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.730 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.730 17:05:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:35.730 Attaching to 0000:00:10.0 00:14:35.730 Attached to 0000:00:10.0 00:14:35.730 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:35.730 00:14:35.730 17:05:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:35.730 17:05:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.730 17:05:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:35.730 Attaching to 0000:00:11.0 00:14:35.730 Attached to 0000:00:11.0 00:14:36.666 QEMU NVMe Ctrl (12340 ): 1832 I/Os completed (+1832) 00:14:36.666 QEMU NVMe Ctrl (12341 ): 1679 I/Os completed (+1679) 00:14:36.666 00:14:37.601 QEMU NVMe Ctrl (12340 ): 3568 I/Os completed (+1736) 00:14:37.601 QEMU NVMe Ctrl (12341 ): 3445 I/Os completed (+1766) 00:14:37.601 00:14:38.537 QEMU NVMe Ctrl (12340 ): 5188 I/Os completed (+1620) 00:14:38.537 QEMU NVMe Ctrl (12341 ): 5103 I/Os completed (+1658) 00:14:38.537 00:14:39.912 QEMU NVMe Ctrl (12340 ): 6664 I/Os completed (+1476) 00:14:39.912 QEMU NVMe Ctrl (12341 ): 6673 I/Os completed (+1570) 00:14:39.912 00:14:40.523 QEMU NVMe Ctrl (12340 ): 8362 I/Os completed (+1698) 00:14:40.523 QEMU NVMe Ctrl (12341 ): 8394 I/Os completed (+1721) 00:14:40.523 00:14:41.893 QEMU NVMe Ctrl (12340 ): 10130 I/Os completed (+1768) 00:14:41.893 QEMU NVMe Ctrl (12341 ): 10200 I/Os completed (+1806) 00:14:41.893 00:14:42.827 QEMU NVMe Ctrl (12340 ): 11706 I/Os completed (+1576) 00:14:42.827 QEMU NVMe Ctrl (12341 ): 11802 I/Os completed (+1602) 00:14:42.827 00:14:43.761 QEMU NVMe Ctrl (12340 ): 13138 I/Os completed (+1432) 00:14:43.761 QEMU NVMe 
Ctrl (12341 ): 13351 I/Os completed (+1549) 00:14:43.761 00:14:44.695 QEMU NVMe Ctrl (12340 ): 14846 I/Os completed (+1708) 00:14:44.695 QEMU NVMe Ctrl (12341 ): 15079 I/Os completed (+1728) 00:14:44.695 00:14:45.629 QEMU NVMe Ctrl (12340 ): 16514 I/Os completed (+1668) 00:14:45.629 QEMU NVMe Ctrl (12341 ): 16793 I/Os completed (+1714) 00:14:45.629 00:14:46.563 QEMU NVMe Ctrl (12340 ): 18250 I/Os completed (+1736) 00:14:46.563 QEMU NVMe Ctrl (12341 ): 18579 I/Os completed (+1786) 00:14:46.563 00:14:47.938 QEMU NVMe Ctrl (12340 ): 19902 I/Os completed (+1652) 00:14:47.938 QEMU NVMe Ctrl (12341 ): 20282 I/Os completed (+1703) 00:14:47.938 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:47.938 [2024-07-25 17:05:40.083634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:14:47.938 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:47.938 [2024-07-25 17:05:40.085669] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.085862] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.085936] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.086090] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:47.938 [2024-07-25 17:05:40.089198] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.089382] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.089527] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.089564] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:47.938 [2024-07-25 17:05:40.115501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:47.938 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:47.938 [2024-07-25 17:05:40.117548] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.117654] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.117729] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.117872] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:47.938 [2024-07-25 17:05:40.120682] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.120741] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.120772] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 [2024-07-25 17:05:40.120797] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:47.938 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:47.938 Attaching to 0000:00:10.0 00:14:47.938 Attached to 0000:00:10.0 00:14:48.196 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:48.196 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:48.196 17:05:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:48.196 Attaching to 0000:00:11.0 00:14:48.196 Attached to 0000:00:11.0 00:14:48.762 QEMU NVMe Ctrl (12340 ): 976 I/Os completed (+976) 00:14:48.762 QEMU NVMe Ctrl (12341 ): 908 I/Os completed (+908) 00:14:48.762 00:14:49.697 QEMU NVMe Ctrl (12340 ): 2656 I/Os completed (+1680) 00:14:49.697 QEMU NVMe Ctrl (12341 ): 2637 I/Os completed (+1729) 00:14:49.697 00:14:50.632 QEMU NVMe Ctrl (12340 ): 4460 I/Os completed (+1804) 00:14:50.632 QEMU NVMe Ctrl (12341 ): 4461 I/Os completed (+1824) 00:14:50.632 00:14:51.567 QEMU NVMe Ctrl (12340 ): 6252 I/Os completed (+1792) 00:14:51.567 QEMU NVMe Ctrl (12341 ): 6266 I/Os completed (+1805) 00:14:51.567 00:14:52.943 QEMU NVMe Ctrl (12340 ): 8000 I/Os completed (+1748) 00:14:52.943 QEMU NVMe Ctrl (12341 ): 8043 I/Os completed (+1777) 00:14:52.943 00:14:53.879 QEMU NVMe Ctrl (12340 ): 9764 I/Os completed (+1764) 00:14:53.879 QEMU NVMe Ctrl (12341 ): 9817 I/Os completed (+1774) 00:14:53.879 00:14:54.815 QEMU NVMe Ctrl (12340 ): 11524 I/Os completed (+1760) 00:14:54.815 QEMU NVMe Ctrl (12341 ): 11589 I/Os completed (+1772) 00:14:54.815 00:14:55.750 QEMU NVMe Ctrl (12340 ): 13246 I/Os completed (+1722) 00:14:55.750 QEMU NVMe Ctrl (12341 ): 13323 I/Os completed (+1734) 00:14:55.750 00:14:56.696 QEMU 
NVMe Ctrl (12340 ): 14930 I/Os completed (+1684) 00:14:56.696 QEMU NVMe Ctrl (12341 ): 15032 I/Os completed (+1709) 00:14:56.696 00:14:57.632 QEMU NVMe Ctrl (12340 ): 16530 I/Os completed (+1600) 00:14:57.632 QEMU NVMe Ctrl (12341 ): 16639 I/Os completed (+1607) 00:14:57.632 00:14:58.567 QEMU NVMe Ctrl (12340 ): 18298 I/Os completed (+1768) 00:14:58.567 QEMU NVMe Ctrl (12341 ): 18412 I/Os completed (+1773) 00:14:58.567 00:14:59.942 QEMU NVMe Ctrl (12340 ): 19946 I/Os completed (+1648) 00:14:59.942 QEMU NVMe Ctrl (12341 ): 20084 I/Os completed (+1672) 00:14:59.942 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:00.201 [2024-07-25 17:05:52.434069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:00.201 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:00.201 [2024-07-25 17:05:52.436253] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.436378] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.436452] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.436663] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:00.201 [2024-07-25 17:05:52.439792] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.439993] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.440140] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.440209] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:00.201 [2024-07-25 17:05:52.466391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:00.201 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:00.201 [2024-07-25 17:05:52.468525] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.468758] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.468841] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.468992] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:00.201 [2024-07-25 17:05:52.471968] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.472186] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.472344] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 [2024-07-25 17:05:52.472480] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:00.201 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:00.201 EAL: Scan for (pci) bus failed. 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:00.201 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:00.459 Attaching to 0000:00:10.0 00:15:00.459 Attached to 0000:00:10.0 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.459 17:05:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:00.459 Attaching to 0000:00:11.0 00:15:00.459 Attached to 0000:00:11.0 00:15:00.459 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:00.459 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:00.459 [2024-07-25 17:05:52.805058] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:12.664 17:06:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:12.664 17:06:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:12.664 17:06:04 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.07 00:15:12.664 17:06:04 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.07 00:15:12.664 17:06:04 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:15:12.664 17:06:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.07 00:15:12.664 17:06:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.07 2 00:15:12.664 remove_attach_helper took 43.07s to complete (handling 2 nvme drive(s)) 17:06:04 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72499 00:15:19.223 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72499) - No such process 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72499 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73039 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:19.223 17:06:10 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73039 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 73039 ']' 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.223 17:06:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:19.223 [2024-07-25 17:06:10.935150] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
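The lines above hand control to the target-side half of the test: tgt_run_hotplug launches spdk_tgt, waits for its RPC socket, and then enables the NVMe hotplug monitor with bdev_nvme_set_hotplug -e. A hedged standalone equivalent, using scripts/rpc.py in place of the rpc_cmd helper used by the script, might look like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  tgt_pid=$!
  # poll until the UNIX-domain RPC socket answers (stand-in for waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e   # same RPC the xtrace issues below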
00:15:19.223 [2024-07-25 17:06:10.935658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:15:19.223 [2024-07-25 17:06:11.109700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.223 [2024-07-25 17:06:11.376984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:19.836 17:06:12 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:19.836 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:19.837 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:19.837 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:19.837 17:06:12 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.390 [2024-07-25 17:06:18.267222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
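The bdev_bdfs helper exercised at sw_hotplug.sh@12-13 above reduces the target's bdev list to the set of NVMe PCI addresses it still sees. The jq filter below is taken from the xtrace; replacing rpc_cmd with scripts/rpc.py and the /dev/fd/63 process substitution with a plain pipe is a simplification:

  bdev_bdfs() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }
  bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0) while both controllers are attached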
00:15:26.390 [2024-07-25 17:06:18.270188] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.270250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.270294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.270324] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.270345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.270361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.270381] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.270397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.270430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.270446] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.270466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.270481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:26.390 [2024-07-25 17:06:18.667218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:26.390 [2024-07-25 17:06:18.670440] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.670652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.670804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.670847] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.670867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.670886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.670903] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.670922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.670937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 [2024-07-25 17:06:18.670957] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.390 [2024-07-25 17:06:18.670972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.390 [2024-07-25 17:06:18.671025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.390 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.390 17:06:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.649 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:26.649 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:26.649 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.649 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.649 17:06:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:26.649 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:26.649 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:26.649 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.649 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.649 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:26.907 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:26.907 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:26.907 17:06:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.104 [2024-07-25 17:06:31.267604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
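The (( 2 > 0 )) / sleep 0.5 / "Still waiting for ... to be gone" lines above come from a poll that waits until bdev_get_bdevs no longer reports the removed addresses, and the sw_hotplug.sh@71 match asserts that both controllers reappear after the rescan and the twelve-second settle. A hedged sketch of that wait-and-verify step, reusing the bdev_bdfs helper sketched earlier:

  removed=(0000:00:10.0 0000:00:11.0)
  # wait until none of the removed controllers is still backing a bdev
  while :; do
      bdfs=($(bdev_bdfs))
      still_here=()
      for bdf in "${removed[@]}"; do
          [[ " ${bdfs[*]} " == *" $bdf "* ]] && still_here+=("$bdf")
      done
      (( ${#still_here[@]} == 0 )) && break
      printf 'Still waiting for %s to be gone\n' "${still_here[@]}"
      sleep 0.5
  done
  # after rescan and settle, assert that exactly the two allowed controllers are back
  bdfs=($(bdev_bdfs))
  [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || exit 1   # failure branch is assumed, not taken from the log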
00:15:39.104 [2024-07-25 17:06:31.270810] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.104 [2024-07-25 17:06:31.271001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.104 [2024-07-25 17:06:31.271115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.104 [2024-07-25 17:06:31.271342] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.104 [2024-07-25 17:06:31.271468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.104 [2024-07-25 17:06:31.271610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.104 [2024-07-25 17:06:31.271823] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.104 [2024-07-25 17:06:31.271965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.104 [2024-07-25 17:06:31.272150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.104 [2024-07-25 17:06:31.272293] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.104 [2024-07-25 17:06:31.272415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.104 [2024-07-25 17:06:31.272554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.104 17:06:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:39.104 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:39.363 [2024-07-25 17:06:31.767626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:15:39.363 [2024-07-25 17:06:31.770846] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.363 [2024-07-25 17:06:31.771164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.363 [2024-07-25 17:06:31.771369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.363 [2024-07-25 17:06:31.771632] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.363 [2024-07-25 17:06:31.771770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.363 [2024-07-25 17:06:31.771927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.363 [2024-07-25 17:06:31.772101] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.363 [2024-07-25 17:06:31.772225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.364 [2024-07-25 17:06:31.772390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.364 [2024-07-25 17:06:31.772602] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.364 [2024-07-25 17:06:31.772730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.364 [2024-07-25 17:06:31.772887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.364 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.364 17:06:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.364 17:06:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.623 17:06:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.623 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:39.623 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:39.623 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:39.623 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:39.623 17:06:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:39.623 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:39.623 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:39.623 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:39.623 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:39.623 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:39.882 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:39.882 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:39.882 17:06:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:52.092 [2024-07-25 17:06:44.267892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:52.092 [2024-07-25 17:06:44.272291] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.092 [2024-07-25 17:06:44.272545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.092 [2024-07-25 17:06:44.272833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.092 [2024-07-25 17:06:44.272873] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.092 [2024-07-25 17:06:44.272900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.092 [2024-07-25 17:06:44.272918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.092 [2024-07-25 17:06:44.272947] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.092 [2024-07-25 17:06:44.272963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.092 [2024-07-25 17:06:44.272998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.092 [2024-07-25 17:06:44.273020] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.092 [2024-07-25 17:06:44.273042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.092 [2024-07-25 17:06:44.273059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.092 17:06:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:52.092 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:52.351 [2024-07-25 17:06:44.667923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:15:52.351 [2024-07-25 17:06:44.671707] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.351 [2024-07-25 17:06:44.671784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.351 [2024-07-25 17:06:44.671810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.351 [2024-07-25 17:06:44.671845] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.351 [2024-07-25 17:06:44.671863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.351 [2024-07-25 17:06:44.671885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.351 [2024-07-25 17:06:44.671903] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.351 [2024-07-25 17:06:44.671924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.351 [2024-07-25 17:06:44.671941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.352 [2024-07-25 17:06:44.671971] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.352 [2024-07-25 17:06:44.672016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.352 [2024-07-25 17:06:44.672039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:52.610 17:06:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.610 17:06:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.610 17:06:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:52.610 17:06:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:52.610 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:52.610 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:52.610 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:52.610 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:52.869 17:06:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.07 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.07 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.07 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.07 2 00:16:05.125 remove_attach_helper took 45.07s to complete (handling 2 nvme drive(s)) 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.125 17:06:57 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:16:05.125 17:06:57 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:05.125 17:06:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:11.686 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:11.686 17:07:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.686 17:07:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:11.686 17:07:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.686 [2024-07-25 17:07:03.367117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
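Before the bdev-aware run of debug_remove_attach_helper 3 6 true above, sw_hotplug.sh@119-120 flips the hotplug monitor off and back on over RPC. An equivalent standalone pair of calls (the rpc.py invocation is an assumption; the -d/-e flags are the ones the xtrace shows) would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the monitor
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it for the remaining hotplug cycles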
00:16:11.686 [2024-07-25 17:07:03.369159] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.686 [2024-07-25 17:07:03.369390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.686 [2024-07-25 17:07:03.369542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.686 [2024-07-25 17:07:03.369718] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.686 [2024-07-25 17:07:03.369900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.370052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 [2024-07-25 17:07:03.370134] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.370254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.370434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 [2024-07-25 17:07:03.370659] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.370836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.370999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:11.687 [2024-07-25 17:07:03.767121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:11.687 [2024-07-25 17:07:03.768932] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.769182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.769349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 [2024-07-25 17:07:03.769577] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.769607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.769628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 [2024-07-25 17:07:03.769644] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.769661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.769676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 [2024-07-25 17:07:03.769695] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:11.687 [2024-07-25 17:07:03.769709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.687 [2024-07-25 17:07:03.769725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:11.687 17:07:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.687 17:07:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:11.687 17:07:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:11.687 17:07:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:11.687 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:11.946 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:11.946 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:11.946 17:07:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:24.147 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:24.147 17:07:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.147 17:07:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.147 17:07:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:24.148 17:07:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.148 17:07:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 [2024-07-25 17:07:16.367278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:16:24.148 [2024-07-25 17:07:16.369308] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.148 [2024-07-25 17:07:16.369538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.148 [2024-07-25 17:07:16.369693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.148 [2024-07-25 17:07:16.369847] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.148 [2024-07-25 17:07:16.369971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.148 [2024-07-25 17:07:16.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.148 [2024-07-25 17:07:16.370330] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.148 [2024-07-25 17:07:16.370446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.148 [2024-07-25 17:07:16.370594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.148 [2024-07-25 17:07:16.370781] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.148 [2024-07-25 17:07:16.370911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.148 [2024-07-25 17:07:16.371079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.148 17:07:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:24.148 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:24.407 [2024-07-25 17:07:16.767305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:16:24.407 [2024-07-25 17:07:16.769718] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.407 [2024-07-25 17:07:16.769917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.407 [2024-07-25 17:07:16.770177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.407 [2024-07-25 17:07:16.770412] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.407 [2024-07-25 17:07:16.770539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.407 [2024-07-25 17:07:16.770773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.407 [2024-07-25 17:07:16.771051] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.407 [2024-07-25 17:07:16.771188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.407 [2024-07-25 17:07:16.771330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.407 [2024-07-25 17:07:16.771495] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.407 [2024-07-25 17:07:16.771786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.407 [2024-07-25 17:07:16.771948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:24.665 17:07:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.665 17:07:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.665 17:07:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:24.665 17:07:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:24.665 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:24.665 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:24.665 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:24.924 17:07:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:37.179 17:07:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.179 17:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:37.179 17:07:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:37.179 [2024-07-25 17:07:29.367493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:16:37.179 [2024-07-25 17:07:29.369873] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.179 [2024-07-25 17:07:29.370095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.179 [2024-07-25 17:07:29.370263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.179 [2024-07-25 17:07:29.370553] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:37.179 [2024-07-25 17:07:29.370828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.179 [2024-07-25 17:07:29.370854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.179 [2024-07-25 17:07:29.370881] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.179 [2024-07-25 17:07:29.370898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.179 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:37.180 [2024-07-25 17:07:29.370920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.180 [2024-07-25 17:07:29.370946] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.180 [2024-07-25 17:07:29.370972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.180 [2024-07-25 17:07:29.371011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:37.180 17:07:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.180 17:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:37.180 17:07:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:37.180 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:37.438 [2024-07-25 17:07:29.767478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:16:37.438 [2024-07-25 17:07:29.769273] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.438 [2024-07-25 17:07:29.769338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.438 [2024-07-25 17:07:29.769361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.438 [2024-07-25 17:07:29.769386] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.438 [2024-07-25 17:07:29.769401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.438 [2024-07-25 17:07:29.769418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.438 [2024-07-25 17:07:29.769434] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.438 [2024-07-25 17:07:29.769450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.438 [2024-07-25 17:07:29.769479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.438 [2024-07-25 17:07:29.769496] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.438 [2024-07-25 17:07:29.769510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.438 [2024-07-25 17:07:29.769529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:37.696 17:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:16:37.696 17:07:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.696 17:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:37.696 17:07:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.696 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:37.696 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:37.696 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:37.696 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:37.696 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:37.954 17:07:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.06 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.06 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.06 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.06 2 00:16:50.196 remove_attach_helper took 45.06s to complete (handling 2 nvme drive(s)) 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:50.196 17:07:42 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73039 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 73039 ']' 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 73039 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73039 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.196 17:07:42 
sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73039' 00:16:50.196 killing process with pid 73039 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@969 -- # kill 73039 00:16:50.196 17:07:42 sw_hotplug -- common/autotest_common.sh@974 -- # wait 73039 00:16:52.102 17:07:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.928 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.928 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:53.186 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:53.186 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:53.186 00:16:53.186 real 2m31.659s 00:16:53.186 user 1m52.610s 00:16:53.186 sys 0m18.812s 00:16:53.186 ************************************ 00:16:53.186 END TEST sw_hotplug 00:16:53.186 ************************************ 00:16:53.186 17:07:45 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.186 17:07:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.186 17:07:45 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:16:53.186 17:07:45 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:53.186 17:07:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:53.186 17:07:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.186 17:07:45 -- common/autotest_common.sh@10 -- # set +x 00:16:53.186 ************************************ 00:16:53.186 START TEST nvme_xnvme 00:16:53.186 ************************************ 00:16:53.186 17:07:45 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:53.445 * Looking for test storage... 
00:16:53.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:53.445 17:07:45 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.445 17:07:45 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.445 17:07:45 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.445 17:07:45 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.445 17:07:45 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.445 17:07:45 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.445 17:07:45 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.445 17:07:45 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:53.445 17:07:45 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.445 17:07:45 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:16:53.445 17:07:45 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:53.445 17:07:45 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.445 17:07:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.445 ************************************ 00:16:53.445 START TEST xnvme_to_malloc_dd_copy 00:16:53.445 ************************************ 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:53.446 17:07:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:53.446 { 00:16:53.446 "subsystems": [ 00:16:53.446 { 00:16:53.446 "subsystem": "bdev", 00:16:53.446 "config": [ 00:16:53.446 { 00:16:53.446 "params": { 00:16:53.446 "block_size": 512, 00:16:53.446 "num_blocks": 2097152, 00:16:53.446 "name": "malloc0" 00:16:53.446 }, 00:16:53.446 "method": "bdev_malloc_create" 00:16:53.446 }, 00:16:53.446 { 00:16:53.446 "params": { 00:16:53.446 "io_mechanism": "libaio", 00:16:53.446 "filename": "/dev/nullb0", 00:16:53.446 "name": "null0" 00:16:53.446 }, 00:16:53.446 "method": "bdev_xnvme_create" 00:16:53.446 }, 00:16:53.446 { 00:16:53.446 "method": "bdev_wait_for_examine" 00:16:53.446 } 00:16:53.446 ] 00:16:53.446 } 00:16:53.446 ] 00:16:53.446 } 00:16:53.446 [2024-07-25 17:07:45.879133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:53.446 [2024-07-25 17:07:45.880198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74382 ] 00:16:53.704 [2024-07-25 17:07:46.057668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.962 [2024-07-25 17:07:46.333496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.571  Copying: 202/1024 [MB] (202 MBps) Copying: 412/1024 [MB] (209 MBps) Copying: 618/1024 [MB] (206 MBps) Copying: 815/1024 [MB] (196 MBps) Copying: 1015/1024 [MB] (199 MBps) Copying: 1024/1024 [MB] (average 203 MBps) 00:17:03.571 00:17:03.571 17:07:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:17:03.571 17:07:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:17:03.571 17:07:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:03.571 17:07:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:03.571 { 00:17:03.571 "subsystems": [ 00:17:03.571 { 00:17:03.571 "subsystem": "bdev", 00:17:03.571 "config": [ 00:17:03.571 { 00:17:03.571 "params": { 00:17:03.571 "block_size": 512, 00:17:03.571 "num_blocks": 2097152, 00:17:03.571 "name": "malloc0" 00:17:03.571 }, 00:17:03.571 "method": "bdev_malloc_create" 00:17:03.571 }, 00:17:03.571 { 00:17:03.571 "params": { 00:17:03.571 "io_mechanism": "libaio", 00:17:03.571 "filename": "/dev/nullb0", 00:17:03.571 "name": "null0" 00:17:03.571 }, 00:17:03.571 "method": "bdev_xnvme_create" 00:17:03.571 }, 00:17:03.571 { 00:17:03.571 "method": "bdev_wait_for_examine" 00:17:03.571 } 00:17:03.571 ] 00:17:03.571 } 00:17:03.571 ] 00:17:03.571 } 00:17:03.571 [2024-07-25 17:07:55.950412] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:03.572 [2024-07-25 17:07:55.950779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74498 ] 00:17:03.831 [2024-07-25 17:07:56.111361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.090 [2024-07-25 17:07:56.306922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.405  Copying: 206/1024 [MB] (206 MBps) Copying: 412/1024 [MB] (206 MBps) Copying: 627/1024 [MB] (214 MBps) Copying: 846/1024 [MB] (218 MBps) Copying: 1024/1024 [MB] (average 211 MBps) 00:17:13.405 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:13.405 17:08:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:13.405 { 00:17:13.405 "subsystems": [ 00:17:13.405 { 00:17:13.405 "subsystem": "bdev", 00:17:13.405 "config": [ 00:17:13.405 { 00:17:13.405 "params": { 00:17:13.405 "block_size": 512, 00:17:13.405 "num_blocks": 2097152, 00:17:13.405 "name": "malloc0" 00:17:13.405 }, 00:17:13.405 "method": "bdev_malloc_create" 00:17:13.405 }, 00:17:13.405 { 00:17:13.405 "params": { 00:17:13.405 "io_mechanism": "io_uring", 00:17:13.405 "filename": "/dev/nullb0", 00:17:13.405 "name": "null0" 00:17:13.405 }, 00:17:13.405 "method": "bdev_xnvme_create" 00:17:13.405 }, 00:17:13.405 { 00:17:13.405 "method": "bdev_wait_for_examine" 00:17:13.405 } 00:17:13.405 ] 00:17:13.405 } 00:17:13.405 ] 00:17:13.405 } 00:17:13.405 [2024-07-25 17:08:05.723144] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:13.405 [2024-07-25 17:08:05.723496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:17:13.663 [2024-07-25 17:08:05.897169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.663 [2024-07-25 17:08:06.093403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.248  Copying: 208/1024 [MB] (208 MBps) Copying: 419/1024 [MB] (210 MBps) Copying: 633/1024 [MB] (214 MBps) Copying: 851/1024 [MB] (217 MBps) Copying: 1024/1024 [MB] (average 213 MBps) 00:17:23.248 00:17:23.248 17:08:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:17:23.248 17:08:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:17:23.248 17:08:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:23.248 17:08:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:23.248 { 00:17:23.248 "subsystems": [ 00:17:23.248 { 00:17:23.248 "subsystem": "bdev", 00:17:23.248 "config": [ 00:17:23.248 { 00:17:23.248 "params": { 00:17:23.248 "block_size": 512, 00:17:23.248 "num_blocks": 2097152, 00:17:23.248 "name": "malloc0" 00:17:23.248 }, 00:17:23.248 "method": "bdev_malloc_create" 00:17:23.248 }, 00:17:23.248 { 00:17:23.248 "params": { 00:17:23.248 "io_mechanism": "io_uring", 00:17:23.248 "filename": "/dev/nullb0", 00:17:23.248 "name": "null0" 00:17:23.248 }, 00:17:23.248 "method": "bdev_xnvme_create" 00:17:23.248 }, 00:17:23.248 { 00:17:23.248 "method": "bdev_wait_for_examine" 00:17:23.248 } 00:17:23.248 ] 00:17:23.248 } 00:17:23.248 ] 00:17:23.248 } 00:17:23.248 [2024-07-25 17:08:15.459685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:23.248 [2024-07-25 17:08:15.459888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74719 ] 00:17:23.248 [2024-07-25 17:08:15.633797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.507 [2024-07-25 17:08:15.829182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.508  Copying: 232/1024 [MB] (232 MBps) Copying: 462/1024 [MB] (230 MBps) Copying: 689/1024 [MB] (227 MBps) Copying: 922/1024 [MB] (232 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:17:32.508 00:17:32.508 17:08:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:17:32.508 17:08:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:17:32.508 ************************************ 00:17:32.508 END TEST xnvme_to_malloc_dd_copy 00:17:32.508 ************************************ 00:17:32.508 00:17:32.508 real 0m39.053s 00:17:32.508 user 0m33.359s 00:17:32.508 sys 0m5.165s 00:17:32.508 17:08:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.508 17:08:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:32.508 17:08:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:32.508 17:08:24 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:32.508 17:08:24 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.508 17:08:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.508 ************************************ 00:17:32.508 START TEST xnvme_bdevperf 00:17:32.508 ************************************ 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:32.508 17:08:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.508 { 00:17:32.508 "subsystems": [ 00:17:32.508 { 00:17:32.508 "subsystem": "bdev", 00:17:32.508 "config": [ 00:17:32.508 { 00:17:32.508 "params": { 00:17:32.508 "io_mechanism": "libaio", 00:17:32.508 "filename": "/dev/nullb0", 00:17:32.508 "name": "null0" 00:17:32.508 }, 00:17:32.508 "method": "bdev_xnvme_create" 00:17:32.508 }, 00:17:32.508 { 00:17:32.508 "method": "bdev_wait_for_examine" 00:17:32.508 } 00:17:32.508 ] 00:17:32.508 } 00:17:32.508 ] 00:17:32.508 } 00:17:32.508 [2024-07-25 17:08:24.941614] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:32.508 [2024-07-25 17:08:24.941794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74846 ] 00:17:32.767 [2024-07-25 17:08:25.117676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.026 [2024-07-25 17:08:25.326681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.284 Running I/O for 5 seconds... 00:17:38.551 00:17:38.551 Latency(us) 00:17:38.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.551 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:38.551 null0 : 5.00 144023.40 562.59 0.00 0.00 441.59 127.53 826.65 00:17:38.551 =================================================================================================================== 00:17:38.551 Total : 144023.40 562.59 0.00 0.00 441.59 127.53 826.65 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.486 17:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.486 { 00:17:39.486 "subsystems": [ 00:17:39.486 { 00:17:39.486 "subsystem": "bdev", 00:17:39.486 "config": [ 00:17:39.486 { 00:17:39.486 "params": { 00:17:39.486 "io_mechanism": "io_uring", 00:17:39.486 "filename": "/dev/nullb0", 00:17:39.486 "name": "null0" 00:17:39.486 }, 00:17:39.486 "method": "bdev_xnvme_create" 00:17:39.486 }, 00:17:39.486 { 00:17:39.486 "method": "bdev_wait_for_examine" 00:17:39.486 } 00:17:39.486 ] 00:17:39.486 } 00:17:39.486 ] 00:17:39.486 } 00:17:39.486 [2024-07-25 17:08:31.767211] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:39.486 [2024-07-25 17:08:31.767388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74926 ] 00:17:39.486 [2024-07-25 17:08:31.940486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.745 [2024-07-25 17:08:32.141851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.003 Running I/O for 5 seconds... 00:17:45.284 00:17:45.284 Latency(us) 00:17:45.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.284 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:45.284 null0 : 5.00 190363.44 743.61 0.00 0.00 333.52 202.94 595.78 00:17:45.284 =================================================================================================================== 00:17:45.284 Total : 190363.44 743.61 0.00 0.00 333.52 202.94 595.78 00:17:46.221 17:08:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:17:46.221 17:08:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:17:46.221 ************************************ 00:17:46.221 END TEST xnvme_bdevperf 00:17:46.221 ************************************ 00:17:46.221 00:17:46.221 real 0m13.674s 00:17:46.221 user 0m10.642s 00:17:46.221 sys 0m2.808s 00:17:46.221 17:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.221 17:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:46.221 ************************************ 00:17:46.221 END TEST nvme_xnvme 00:17:46.221 ************************************ 00:17:46.221 00:17:46.221 real 0m52.916s 00:17:46.221 user 0m44.070s 00:17:46.221 sys 0m8.083s 00:17:46.221 17:08:38 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.221 17:08:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.221 17:08:38 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:46.221 17:08:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:46.221 17:08:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.221 17:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:46.221 ************************************ 00:17:46.221 START TEST blockdev_xnvme 00:17:46.221 ************************************ 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:46.221 * Looking for test storage... 
00:17:46.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75060 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75060 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 75060 ']' 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.221 17:08:38 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.221 17:08:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.480 [2024-07-25 17:08:38.808853] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:46.480 [2024-07-25 17:08:38.809096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75060 ] 00:17:46.739 [2024-07-25 17:08:38.981254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.739 [2024-07-25 17:08:39.177793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.675 17:08:39 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.675 17:08:39 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:17:47.675 17:08:39 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:47.675 17:08:39 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:17:47.675 17:08:39 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:47.675 17:08:39 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:47.675 17:08:39 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:47.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.933 Waiting for block devices as requested 00:17:48.192 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.192 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.192 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.451 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:53.759 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:53.759 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:53.759 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:53.760 17:08:45 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:17:53.760 nvme0n1 00:17:53.760 nvme1n1 00:17:53.760 nvme2n1 00:17:53.760 nvme2n2 00:17:53.760 nvme2n3 00:17:53.760 nvme3n1 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.760 
17:08:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:53.760 17:08:45 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.760 17:08:45 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e87c71d8-1f11-4f0f-84f7-822ca4feabbe"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e87c71d8-1f11-4f0f-84f7-822ca4feabbe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "92d2c917-2139-4a39-834a-908241a548dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "92d2c917-2139-4a39-834a-908241a548dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8ae2f9c0-5421-4796-9e98-c391b050c1c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ae2f9c0-5421-4796-9e98-c391b050c1c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6c2ad450-d461-4a81-9555-d02551c55baf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c2ad450-d461-4a81-9555-d02551c55baf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "86af8385-5429-4d1a-9be5-bae9c595c547"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "86af8385-5429-4d1a-9be5-bae9c595c547",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "eb23485e-7250-49d6-a39a-08753604948d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "eb23485e-7250-49d6-a39a-08753604948d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:53.760 17:08:46 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 75060 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 75060 ']' 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 75060 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75060 00:17:53.760 killing process with pid 75060 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 75060' 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 75060 00:17:53.760 17:08:46 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 75060 00:17:55.659 17:08:47 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:55.659 17:08:47 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:55.659 17:08:47 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:55.659 17:08:47 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.659 17:08:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:55.659 ************************************ 00:17:55.659 START TEST bdev_hello_world 00:17:55.659 ************************************ 00:17:55.659 17:08:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:55.659 [2024-07-25 17:08:48.095456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:55.659 [2024-07-25 17:08:48.095637] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75430 ] 00:17:55.917 [2024-07-25 17:08:48.271245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.175 [2024-07-25 17:08:48.465444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.434 [2024-07-25 17:08:48.858085] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:56.434 [2024-07-25 17:08:48.858140] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:56.434 [2024-07-25 17:08:48.858178] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:56.434 [2024-07-25 17:08:48.860471] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:56.434 [2024-07-25 17:08:48.860769] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:56.434 [2024-07-25 17:08:48.860795] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:56.434 [2024-07-25 17:08:48.861052] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
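The run above is the bdev_hello_world test driving SPDK's hello_bdev example against the generated xnvme bdev config. As a rough sketch, the same example can be invoked by hand with the paths shown in the trace (the target bdev name nvme0n1 is assumed to match the generated bdev.json):

    # Sketch: re-run the hello_bdev example against the same JSON config.
    # Paths and bdev name are taken from the trace above, not a new interface.
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1
    # On success it logs "Read string from bdev : Hello World!" and then
    # "Stopping app", as in the surrounding output.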
00:17:56.434 00:17:56.434 [2024-07-25 17:08:48.861079] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:57.806 00:17:57.806 real 0m1.890s 00:17:57.806 user 0m1.495s 00:17:57.806 sys 0m0.281s 00:17:57.806 17:08:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.806 17:08:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:57.806 ************************************ 00:17:57.806 END TEST bdev_hello_world 00:17:57.806 ************************************ 00:17:57.806 17:08:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:57.806 17:08:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.806 17:08:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.806 17:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:57.806 ************************************ 00:17:57.806 START TEST bdev_bounds 00:17:57.806 ************************************ 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:57.806 Process bdevio pid: 75466 00:17:57.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75466 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75466' 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75466 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75466 ']' 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.806 17:08:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:57.806 [2024-07-25 17:08:50.045900] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:57.806 [2024-07-25 17:08:50.046423] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75466 ] 00:17:57.806 [2024-07-25 17:08:50.215756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:58.064 [2024-07-25 17:08:50.404379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.064 [2024-07-25 17:08:50.404506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.064 [2024-07-25 17:08:50.404533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.629 17:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.629 17:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:58.630 17:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:58.630 I/O targets: 00:17:58.630 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:58.630 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:58.630 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:58.630 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:58.630 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:58.630 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:58.630 00:17:58.630 00:17:58.630 CUnit - A unit testing framework for C - Version 2.1-3 00:17:58.630 http://cunit.sourceforge.net/ 00:17:58.630 00:17:58.630 00:17:58.630 Suite: bdevio tests on: nvme3n1 00:17:58.630 Test: blockdev write read block ...passed 00:17:58.630 Test: blockdev write zeroes read block ...passed 00:17:58.630 Test: blockdev write zeroes read no split ...passed 00:17:58.630 Test: blockdev write zeroes read split ...passed 00:17:58.630 Test: blockdev write zeroes read split partial ...passed 00:17:58.630 Test: blockdev reset ...passed 00:17:58.630 Test: blockdev write read 8 blocks ...passed 00:17:58.630 Test: blockdev write read size > 128k ...passed 00:17:58.630 Test: blockdev write read invalid size ...passed 00:17:58.630 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.630 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.630 Test: blockdev write read max offset ...passed 00:17:58.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.630 Test: blockdev writev readv 8 blocks ...passed 00:17:58.630 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.630 Test: blockdev writev readv block ...passed 00:17:58.888 Test: blockdev writev readv size > 128k ...passed 00:17:58.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.888 Test: blockdev comparev and writev ...passed 00:17:58.888 Test: blockdev nvme passthru rw ...passed 00:17:58.888 Test: blockdev nvme passthru vendor specific ...passed 00:17:58.888 Test: blockdev nvme admin passthru ...passed 00:17:58.888 Test: blockdev copy ...passed 00:17:58.888 Suite: bdevio tests on: nvme2n3 00:17:58.888 Test: blockdev write read block ...passed 00:17:58.888 Test: blockdev write zeroes read block ...passed 00:17:58.888 Test: blockdev write zeroes read no split ...passed 00:17:58.888 Test: blockdev write zeroes read split ...passed 00:17:58.888 Test: blockdev write zeroes read split partial ...passed 00:17:58.888 Test: blockdev reset ...passed 
00:17:58.888 Test: blockdev write read 8 blocks ...passed 00:17:58.888 Test: blockdev write read size > 128k ...passed 00:17:58.888 Test: blockdev write read invalid size ...passed 00:17:58.888 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.888 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.888 Test: blockdev write read max offset ...passed 00:17:58.888 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.888 Test: blockdev writev readv 8 blocks ...passed 00:17:58.888 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.888 Test: blockdev writev readv block ...passed 00:17:58.888 Test: blockdev writev readv size > 128k ...passed 00:17:58.888 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.889 Test: blockdev comparev and writev ...passed 00:17:58.889 Test: blockdev nvme passthru rw ...passed 00:17:58.889 Test: blockdev nvme passthru vendor specific ...passed 00:17:58.889 Test: blockdev nvme admin passthru ...passed 00:17:58.889 Test: blockdev copy ...passed 00:17:58.889 Suite: bdevio tests on: nvme2n2 00:17:58.889 Test: blockdev write read block ...passed 00:17:58.889 Test: blockdev write zeroes read block ...passed 00:17:58.889 Test: blockdev write zeroes read no split ...passed 00:17:58.889 Test: blockdev write zeroes read split ...passed 00:17:58.889 Test: blockdev write zeroes read split partial ...passed 00:17:58.889 Test: blockdev reset ...passed 00:17:58.889 Test: blockdev write read 8 blocks ...passed 00:17:58.889 Test: blockdev write read size > 128k ...passed 00:17:58.889 Test: blockdev write read invalid size ...passed 00:17:58.889 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.889 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.889 Test: blockdev write read max offset ...passed 00:17:58.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.889 Test: blockdev writev readv 8 blocks ...passed 00:17:58.889 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.889 Test: blockdev writev readv block ...passed 00:17:58.889 Test: blockdev writev readv size > 128k ...passed 00:17:58.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.889 Test: blockdev comparev and writev ...passed 00:17:58.889 Test: blockdev nvme passthru rw ...passed 00:17:58.889 Test: blockdev nvme passthru vendor specific ...passed 00:17:58.889 Test: blockdev nvme admin passthru ...passed 00:17:58.889 Test: blockdev copy ...passed 00:17:58.889 Suite: bdevio tests on: nvme2n1 00:17:58.889 Test: blockdev write read block ...passed 00:17:58.889 Test: blockdev write zeroes read block ...passed 00:17:58.889 Test: blockdev write zeroes read no split ...passed 00:17:58.889 Test: blockdev write zeroes read split ...passed 00:17:58.889 Test: blockdev write zeroes read split partial ...passed 00:17:58.889 Test: blockdev reset ...passed 00:17:58.889 Test: blockdev write read 8 blocks ...passed 00:17:58.889 Test: blockdev write read size > 128k ...passed 00:17:58.889 Test: blockdev write read invalid size ...passed 00:17:58.889 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.889 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.889 Test: blockdev write read max offset ...passed 00:17:58.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.889 Test: blockdev writev readv 8 blocks 
...passed 00:17:58.889 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.889 Test: blockdev writev readv block ...passed 00:17:58.889 Test: blockdev writev readv size > 128k ...passed 00:17:58.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.889 Test: blockdev comparev and writev ...passed 00:17:58.889 Test: blockdev nvme passthru rw ...passed 00:17:58.889 Test: blockdev nvme passthru vendor specific ...passed 00:17:58.889 Test: blockdev nvme admin passthru ...passed 00:17:58.889 Test: blockdev copy ...passed 00:17:58.889 Suite: bdevio tests on: nvme1n1 00:17:58.889 Test: blockdev write read block ...passed 00:17:58.889 Test: blockdev write zeroes read block ...passed 00:17:58.889 Test: blockdev write zeroes read no split ...passed 00:17:58.889 Test: blockdev write zeroes read split ...passed 00:17:58.889 Test: blockdev write zeroes read split partial ...passed 00:17:58.889 Test: blockdev reset ...passed 00:17:58.889 Test: blockdev write read 8 blocks ...passed 00:17:58.889 Test: blockdev write read size > 128k ...passed 00:17:58.889 Test: blockdev write read invalid size ...passed 00:17:58.889 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:58.889 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:58.889 Test: blockdev write read max offset ...passed 00:17:58.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:58.889 Test: blockdev writev readv 8 blocks ...passed 00:17:58.889 Test: blockdev writev readv 30 x 1block ...passed 00:17:58.889 Test: blockdev writev readv block ...passed 00:17:58.889 Test: blockdev writev readv size > 128k ...passed 00:17:58.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:58.889 Test: blockdev comparev and writev ...passed 00:17:58.889 Test: blockdev nvme passthru rw ...passed 00:17:58.889 Test: blockdev nvme passthru vendor specific ...passed 00:17:58.889 Test: blockdev nvme admin passthru ...passed 00:17:58.889 Test: blockdev copy ...passed 00:17:58.889 Suite: bdevio tests on: nvme0n1 00:17:58.889 Test: blockdev write read block ...passed 00:17:58.889 Test: blockdev write zeroes read block ...passed 00:17:58.889 Test: blockdev write zeroes read no split ...passed 00:17:59.148 Test: blockdev write zeroes read split ...passed 00:17:59.148 Test: blockdev write zeroes read split partial ...passed 00:17:59.148 Test: blockdev reset ...passed 00:17:59.148 Test: blockdev write read 8 blocks ...passed 00:17:59.148 Test: blockdev write read size > 128k ...passed 00:17:59.148 Test: blockdev write read invalid size ...passed 00:17:59.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:59.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:59.148 Test: blockdev write read max offset ...passed 00:17:59.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:59.148 Test: blockdev writev readv 8 blocks ...passed 00:17:59.148 Test: blockdev writev readv 30 x 1block ...passed 00:17:59.148 Test: blockdev writev readv block ...passed 00:17:59.148 Test: blockdev writev readv size > 128k ...passed 00:17:59.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:59.148 Test: blockdev comparev and writev ...passed 00:17:59.148 Test: blockdev nvme passthru rw ...passed 00:17:59.148 Test: blockdev nvme passthru vendor specific ...passed 00:17:59.148 Test: blockdev nvme admin passthru ...passed 00:17:59.148 Test: blockdev copy ...passed 
00:17:59.148 00:17:59.148 Run Summary: Type Total Ran Passed Failed Inactive 00:17:59.148 suites 6 6 n/a 0 0 00:17:59.148 tests 138 138 138 0 0 00:17:59.148 asserts 780 780 780 0 n/a 00:17:59.148 00:17:59.148 Elapsed time = 0.998 seconds 00:17:59.148 0 00:17:59.148 17:08:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75466 00:17:59.148 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75466 ']' 00:17:59.148 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75466 00:17:59.148 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:59.148 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75466 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75466' 00:17:59.149 killing process with pid 75466 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75466 00:17:59.149 17:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75466 00:18:00.084 17:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:00.084 00:18:00.084 real 0m2.553s 00:18:00.084 user 0m5.949s 00:18:00.084 sys 0m0.423s 00:18:00.084 17:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.084 ************************************ 00:18:00.084 END TEST bdev_bounds 00:18:00.084 ************************************ 00:18:00.084 17:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 17:08:52 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:18:00.084 17:08:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:00.084 17:08:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.084 17:08:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 ************************************ 00:18:00.084 START TEST bdev_nbd 00:18:00.084 ************************************ 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:00.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75526 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:00.084 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75526 /var/tmp/spdk-nbd.sock 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75526 ']' 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.085 17:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:00.343 [2024-07-25 17:08:52.651164] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:00.343 [2024-07-25 17:08:52.651337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.343 [2024-07-25 17:08:52.808813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.601 [2024-07-25 17:08:53.040401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:01.167 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.426 
1+0 records in 00:18:01.426 1+0 records out 00:18:01.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638708 s, 6.4 MB/s 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:01.426 17:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.684 1+0 records in 00:18:01.684 1+0 records out 00:18:01.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814646 s, 5.0 MB/s 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:01.684 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:01.943 17:08:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.943 1+0 records in 00:18:01.943 1+0 records out 00:18:01.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734209 s, 5.6 MB/s 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:01.943 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:18:02.201 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.202 1+0 records in 00:18:02.202 1+0 records out 00:18:02.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679326 s, 6.0 MB/s 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.202 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.461 1+0 records in 00:18:02.461 1+0 records out 00:18:02.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905291 s, 4.5 MB/s 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.461 17:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:18:02.720 17:08:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.720 1+0 records in 00:18:02.720 1+0 records out 00:18:02.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106291 s, 3.9 MB/s 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.720 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd0", 00:18:02.979 "bdev_name": "nvme0n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd1", 00:18:02.979 "bdev_name": "nvme1n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd2", 00:18:02.979 "bdev_name": "nvme2n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd3", 00:18:02.979 "bdev_name": "nvme2n2" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd4", 00:18:02.979 "bdev_name": "nvme2n3" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd5", 00:18:02.979 "bdev_name": "nvme3n1" 00:18:02.979 } 00:18:02.979 ]' 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd0", 00:18:02.979 "bdev_name": "nvme0n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd1", 00:18:02.979 "bdev_name": "nvme1n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd2", 00:18:02.979 "bdev_name": "nvme2n1" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd3", 00:18:02.979 "bdev_name": "nvme2n2" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd4", 00:18:02.979 "bdev_name": "nvme2n3" 00:18:02.979 }, 00:18:02.979 { 00:18:02.979 "nbd_device": "/dev/nbd5", 00:18:02.979 "bdev_name": "nvme3n1" 00:18:02.979 } 00:18:02.979 ]' 00:18:02.979 17:08:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.979 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:03.238 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.238 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.238 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.239 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.497 17:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.756 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.015 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.274 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:04.532 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:04.532 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:04.532 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:04.532 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.533 17:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:04.791 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:04.792 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:05.050 /dev/nbd0 00:18:05.050 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:05.050 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:05.050 17:08:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:05.050 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.051 1+0 records in 00:18:05.051 1+0 records out 00:18:05.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047933 s, 8.5 MB/s 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:05.051 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:18:05.309 /dev/nbd1 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.309 1+0 records in 00:18:05.309 1+0 records out 00:18:05.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120259 s, 3.4 MB/s 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:05.309 17:08:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:05.309 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:18:05.568 /dev/nbd10 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.568 1+0 records in 00:18:05.568 1+0 records out 00:18:05.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054543 s, 7.5 MB/s 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:05.568 17:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:18:05.827 /dev/nbd11 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.827 17:08:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.827 1+0 records in 00:18:05.827 1+0 records out 00:18:05.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714797 s, 5.7 MB/s 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:05.827 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:18:06.086 /dev/nbd12 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.086 1+0 records in 00:18:06.086 1+0 records out 00:18:06.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772024 s, 5.3 MB/s 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:06.086 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:06.345 /dev/nbd13 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.345 1+0 records in 00:18:06.345 1+0 records out 00:18:06.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000835392 s, 4.9 MB/s 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.345 17:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:06.604 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd0", 00:18:06.604 "bdev_name": "nvme0n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd1", 00:18:06.604 "bdev_name": "nvme1n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd10", 00:18:06.604 "bdev_name": "nvme2n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd11", 00:18:06.604 "bdev_name": "nvme2n2" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd12", 00:18:06.604 "bdev_name": "nvme2n3" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd13", 00:18:06.604 "bdev_name": "nvme3n1" 00:18:06.604 } 00:18:06.604 ]' 00:18:06.604 17:08:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd0", 00:18:06.604 "bdev_name": "nvme0n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd1", 00:18:06.604 "bdev_name": "nvme1n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd10", 00:18:06.604 "bdev_name": "nvme2n1" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd11", 00:18:06.604 "bdev_name": "nvme2n2" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd12", 00:18:06.604 "bdev_name": "nvme2n3" 00:18:06.604 }, 00:18:06.604 { 00:18:06.604 "nbd_device": "/dev/nbd13", 00:18:06.604 "bdev_name": "nvme3n1" 00:18:06.604 } 00:18:06.604 ]' 00:18:06.604 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:06.863 /dev/nbd1 00:18:06.863 /dev/nbd10 00:18:06.863 /dev/nbd11 00:18:06.863 /dev/nbd12 00:18:06.863 /dev/nbd13' 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:06.863 /dev/nbd1 00:18:06.863 /dev/nbd10 00:18:06.863 /dev/nbd11 00:18:06.863 /dev/nbd12 00:18:06.863 /dev/nbd13' 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:06.863 256+0 records in 00:18:06.863 256+0 records out 00:18:06.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107106 s, 97.9 MB/s 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:06.863 256+0 records in 00:18:06.863 256+0 records out 00:18:06.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174295 s, 6.0 MB/s 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:06.863 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:07.122 256+0 records in 00:18:07.122 256+0 records out 00:18:07.122 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.197469 s, 5.3 MB/s 00:18:07.122 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:07.122 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:07.381 256+0 records in 00:18:07.381 256+0 records out 00:18:07.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1985 s, 5.3 MB/s 00:18:07.381 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:07.381 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:07.381 256+0 records in 00:18:07.381 256+0 records out 00:18:07.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167482 s, 6.3 MB/s 00:18:07.640 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:07.640 17:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:07.640 256+0 records in 00:18:07.640 256+0 records out 00:18:07.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162054 s, 6.5 MB/s 00:18:07.640 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:07.640 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:07.899 256+0 records in 00:18:07.899 256+0 records out 00:18:07.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162848 s, 6.4 MB/s 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.899 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.158 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.416 17:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.674 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.932 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.190 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.448 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:09.707 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:09.707 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:09.707 17:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:18:09.707 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:09.966 malloc_lvol_verify 00:18:09.966 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:10.231 3e3d8f3a-0ea6-4847-adf2-eb4a715845e2 00:18:10.231 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:10.500 3ed918b3-6d4d-4896-89e7-30d5499959b8 00:18:10.500 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:10.500 /dev/nbd0 00:18:10.500 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:18:10.500 mke2fs 1.46.5 (30-Dec-2021) 00:18:10.500 Discarding device blocks: 0/4096 done 00:18:10.500 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:18:10.500 00:18:10.500 Allocating group tables: 0/1 done 00:18:10.500 Writing inode tables: 0/1 done 00:18:10.500 Creating journal (1024 blocks): done 00:18:10.500 Writing superblocks and filesystem accounting information: 0/1 done 00:18:10.500 00:18:10.500 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:18:10.500 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:10.500 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.501 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:10.501 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:10.501 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:10.501 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.501 17:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75526 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75526 ']' 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75526 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.759 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75526 00:18:11.017 killing process with pid 75526 00:18:11.017 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.017 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.017 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75526' 00:18:11.017 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75526 00:18:11.017 17:09:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75526 00:18:11.952 ************************************ 00:18:11.952 END TEST bdev_nbd 00:18:11.952 ************************************ 00:18:11.952 17:09:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:11.952 00:18:11.952 real 0m11.747s 00:18:11.952 user 0m16.144s 00:18:11.952 sys 
0m4.066s 00:18:11.952 17:09:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.952 17:09:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:11.952 17:09:04 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:11.952 17:09:04 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:18:11.952 17:09:04 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:18:11.952 17:09:04 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:11.952 17:09:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:11.952 17:09:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.952 17:09:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:11.952 ************************************ 00:18:11.952 START TEST bdev_fio 00:18:11.952 ************************************ 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:11.952 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # 
[[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:11.952 ************************************ 00:18:11.952 START TEST bdev_fio_rw_verify 00:18:11.952 ************************************ 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:11.952 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:12.211 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.211 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.211 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:18:12.211 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.211 17:09:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:12.211 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:12.211 fio-3.35 00:18:12.211 Starting 6 threads 00:18:24.421 00:18:24.421 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75950: Thu Jul 25 17:09:15 2024 00:18:24.421 read: IOPS=28.9k, BW=113MiB/s (118MB/s)(1128MiB/10001msec) 00:18:24.421 slat (usec): 
min=2, max=4143, avg= 7.51, stdev=11.77 00:18:24.421 clat (usec): min=75, max=8211, avg=656.06, stdev=253.81 00:18:24.421 lat (usec): min=81, max=8219, avg=663.57, stdev=254.82 00:18:24.421 clat percentiles (usec): 00:18:24.421 | 50.000th=[ 685], 99.000th=[ 1205], 99.900th=[ 1844], 99.990th=[ 6521], 00:18:24.421 | 99.999th=[ 8225] 00:18:24.421 write: IOPS=29.2k, BW=114MiB/s (119MB/s)(1140MiB/10001msec); 0 zone resets 00:18:24.421 slat (usec): min=12, max=5186, avg=25.29, stdev=33.82 00:18:24.421 clat (usec): min=96, max=9266, avg=740.59, stdev=270.44 00:18:24.421 lat (usec): min=116, max=9284, avg=765.89, stdev=272.99 00:18:24.421 clat percentiles (usec): 00:18:24.421 | 50.000th=[ 750], 99.000th=[ 1401], 99.900th=[ 2180], 99.990th=[ 8291], 00:18:24.421 | 99.999th=[ 9110] 00:18:24.421 bw ( KiB/s): min=98158, max=145336, per=99.90%, avg=116564.68, stdev=2230.74, samples=114 00:18:24.421 iops : min=24539, max=36334, avg=29140.79, stdev=557.70, samples=114 00:18:24.421 lat (usec) : 100=0.01%, 250=2.87%, 500=17.92%, 750=36.31%, 1000=35.61% 00:18:24.421 lat (msec) : 2=7.17%, 4=0.05%, 10=0.06% 00:18:24.421 cpu : usr=59.13%, sys=27.10%, ctx=7963, majf=0, minf=24583 00:18:24.421 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.421 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.421 issued rwts: total=288645,291728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.421 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:24.421 00:18:24.421 Run status group 0 (all jobs): 00:18:24.421 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1128MiB (1182MB), run=10001-10001msec 00:18:24.421 WRITE: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=1140MiB (1195MB), run=10001-10001msec 00:18:24.421 ----------------------------------------------------- 00:18:24.421 Suppressions used: 00:18:24.421 count bytes template 00:18:24.421 6 48 /usr/src/fio/parse.c 00:18:24.421 2877 276192 /usr/src/fio/iolog.c 00:18:24.421 1 8 libtcmalloc_minimal.so 00:18:24.421 1 904 libcrypto.so 00:18:24.421 ----------------------------------------------------- 00:18:24.421 00:18:24.421 00:18:24.421 real 0m12.176s 00:18:24.421 user 0m37.153s 00:18:24.421 sys 0m16.649s 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:24.421 ************************************ 00:18:24.421 END TEST bdev_fio_rw_verify 00:18:24.421 ************************************ 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.421 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:24.422 
17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e87c71d8-1f11-4f0f-84f7-822ca4feabbe"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e87c71d8-1f11-4f0f-84f7-822ca4feabbe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "92d2c917-2139-4a39-834a-908241a548dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "92d2c917-2139-4a39-834a-908241a548dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8ae2f9c0-5421-4796-9e98-c391b050c1c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ae2f9c0-5421-4796-9e98-c391b050c1c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "6c2ad450-d461-4a81-9555-d02551c55baf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c2ad450-d461-4a81-9555-d02551c55baf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "86af8385-5429-4d1a-9be5-bae9c595c547"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "86af8385-5429-4d1a-9be5-bae9c595c547",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "eb23485e-7250-49d6-a39a-08753604948d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "eb23485e-7250-49d6-a39a-08753604948d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.422 /home/vagrant/spdk_repo/spdk 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 
00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:24.422 00:18:24.422 real 0m12.362s 00:18:24.422 user 0m37.254s 00:18:24.422 sys 0m16.731s 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.422 17:09:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:24.422 ************************************ 00:18:24.422 END TEST bdev_fio 00:18:24.422 ************************************ 00:18:24.422 17:09:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:24.422 17:09:16 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:24.422 17:09:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:24.422 17:09:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.422 17:09:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.422 ************************************ 00:18:24.422 START TEST bdev_verify 00:18:24.422 ************************************ 00:18:24.422 17:09:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:24.708 [2024-07-25 17:09:16.890742] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:24.708 [2024-07-25 17:09:16.890977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76121 ] 00:18:24.708 [2024-07-25 17:09:17.074328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.966 [2024-07-25 17:09:17.346411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.966 [2024-07-25 17:09:17.346426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.532 Running I/O for 5 seconds... 
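The verify stage under way here drives all six xNVMe bdevs through SPDK's bdevperf example for five seconds. A minimal sketch of the traced invocation, assuming the same repository layout as the paths above (bdev.json is the config referenced throughout the run):

    # Queue depth 128, 4096-byte I/O, "verify" workload for 5 seconds,
    # core mask 0x3 (the two reactors reported above), as traced.
    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3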
00:18:30.799 00:18:30.799 Latency(us) 00:18:30.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.799 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0xa0000 00:18:30.799 nvme0n1 : 5.05 1698.93 6.64 0.00 0.00 75206.06 14358.34 65774.31 00:18:30.799 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0xa0000 length 0xa0000 00:18:30.799 nvme0n1 : 5.06 1797.11 7.02 0.00 0.00 71094.39 6881.28 73400.32 00:18:30.799 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0xbd0bd 00:18:30.799 nvme1n1 : 5.04 3186.27 12.45 0.00 0.00 39988.07 5183.30 61484.68 00:18:30.799 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:18:30.799 nvme1n1 : 5.04 3298.87 12.89 0.00 0.00 38633.41 5093.93 71493.82 00:18:30.799 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0x80000 00:18:30.799 nvme2n1 : 5.04 1700.53 6.64 0.00 0.00 74803.88 10247.45 69587.32 00:18:30.799 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x80000 length 0x80000 00:18:30.799 nvme2n1 : 5.05 1798.25 7.02 0.00 0.00 70650.80 6255.71 74353.57 00:18:30.799 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0x80000 00:18:30.799 nvme2n2 : 5.04 1700.04 6.64 0.00 0.00 74728.16 10009.13 63391.19 00:18:30.799 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x80000 length 0x80000 00:18:30.799 nvme2n2 : 5.06 1796.51 7.02 0.00 0.00 70596.21 7626.01 69110.69 00:18:30.799 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0x80000 00:18:30.799 nvme2n3 : 5.05 1698.35 6.63 0.00 0.00 74662.26 13881.72 66727.56 00:18:30.799 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x80000 length 0x80000 00:18:30.799 nvme2n3 : 5.06 1797.69 7.02 0.00 0.00 70419.52 6672.76 65774.31 00:18:30.799 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x0 length 0x20000 00:18:30.799 nvme3n1 : 5.06 1720.24 6.72 0.00 0.00 73590.43 1079.85 74830.20 00:18:30.799 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.799 Verification LBA range: start 0x20000 length 0x20000 00:18:30.799 nvme3n1 : 5.07 1818.71 7.10 0.00 0.00 69520.41 1087.30 76260.07 00:18:30.799 =================================================================================================================== 00:18:30.799 Total : 24011.50 93.79 0.00 0.00 63521.18 1079.85 76260.07 00:18:31.731 00:18:31.731 real 0m7.203s 00:18:31.731 user 0m10.930s 00:18:31.731 sys 0m1.927s 00:18:31.731 17:09:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.731 17:09:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:31.731 ************************************ 00:18:31.731 END TEST bdev_verify 00:18:31.731 ************************************ 00:18:31.731 17:09:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:31.731 17:09:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:31.731 17:09:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.731 17:09:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.731 ************************************ 00:18:31.731 START TEST bdev_verify_big_io 00:18:31.731 ************************************ 00:18:31.731 17:09:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:31.731 [2024-07-25 17:09:24.112745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:31.731 [2024-07-25 17:09:24.112944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76226 ] 00:18:31.990 [2024-07-25 17:09:24.287953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:32.247 [2024-07-25 17:09:24.497680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.247 [2024-07-25 17:09:24.497693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.813 Running I/O for 5 seconds... 00:18:39.418 00:18:39.418 Latency(us) 00:18:39.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.418 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0xa000 00:18:39.418 nvme0n1 : 5.86 91.51 5.72 0.00 0.00 1351517.72 233546.47 3065654.92 00:18:39.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0xa000 length 0xa000 00:18:39.418 nvme0n1 : 5.90 132.96 8.31 0.00 0.00 931788.17 63391.19 991380.95 00:18:39.418 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0xbd0b 00:18:39.418 nvme1n1 : 5.83 170.01 10.63 0.00 0.00 710342.25 42896.29 1067641.02 00:18:39.418 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:39.418 nvme1n1 : 5.91 146.14 9.13 0.00 0.00 826794.01 64344.44 1159153.11 00:18:39.418 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0x8000 00:18:39.418 nvme2n1 : 5.87 162.16 10.13 0.00 0.00 735865.96 29908.25 1121023.07 00:18:39.418 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x8000 length 0x8000 00:18:39.418 nvme2n1 : 5.91 174.76 10.92 0.00 0.00 666922.90 57909.99 793104.76 00:18:39.418 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0x8000 00:18:39.418 nvme2n2 : 5.86 125.53 7.85 0.00 0.00 927678.43 20018.27 1082893.03 00:18:39.418 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x8000 length 0x8000 00:18:39.418 nvme2n2 : 5.85 103.90 6.49 
0.00 0.00 1094252.08 138221.38 2013265.92 00:18:39.418 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0x8000 00:18:39.418 nvme2n3 : 5.87 174.33 10.90 0.00 0.00 648596.01 23473.80 621519.59 00:18:39.418 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x8000 length 0x8000 00:18:39.418 nvme2n3 : 5.91 113.73 7.11 0.00 0.00 975365.76 49092.42 2409818.30 00:18:39.418 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x0 length 0x2000 00:18:39.418 nvme3n1 : 5.87 128.18 8.01 0.00 0.00 857446.58 11677.32 2257298.15 00:18:39.418 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:39.418 Verification LBA range: start 0x2000 length 0x2000 00:18:39.418 nvme3n1 : 5.91 159.61 9.98 0.00 0.00 676797.54 9592.09 1662469.59 00:18:39.418 =================================================================================================================== 00:18:39.418 Total : 1682.82 105.18 0.00 0.00 830046.00 9592.09 3065654.92 00:18:39.995 00:18:39.995 real 0m8.338s 00:18:39.995 user 0m14.803s 00:18:39.995 sys 0m0.673s 00:18:39.995 17:09:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.995 17:09:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 ************************************ 00:18:39.995 END TEST bdev_verify_big_io 00:18:39.995 ************************************ 00:18:39.995 17:09:32 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:39.995 17:09:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:39.995 17:09:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.995 17:09:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 ************************************ 00:18:39.995 START TEST bdev_write_zeroes 00:18:39.995 ************************************ 00:18:39.995 17:09:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:40.253 [2024-07-25 17:09:32.511510] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:40.253 [2024-07-25 17:09:32.511723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76337 ] 00:18:40.253 [2024-07-25 17:09:32.691061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.511 [2024-07-25 17:09:32.947777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.077 Running I/O for 1 seconds... 
00:18:42.011 00:18:42.011 Latency(us) 00:18:42.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.011 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme0n1 : 1.01 10518.65 41.09 0.00 0.00 12154.71 7149.38 21448.15 00:18:42.011 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme1n1 : 1.02 16792.65 65.60 0.00 0.00 7583.78 4408.79 15609.48 00:18:42.011 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme2n1 : 1.01 10476.91 40.93 0.00 0.00 12112.12 6851.49 18469.24 00:18:42.011 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme2n2 : 1.02 10464.63 40.88 0.00 0.00 12118.72 6911.07 19422.49 00:18:42.011 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme2n3 : 1.02 10534.13 41.15 0.00 0.00 12026.57 3991.74 20375.74 00:18:42.011 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:42.011 nvme3n1 : 1.02 10524.24 41.11 0.00 0.00 12026.94 4170.47 21328.99 00:18:42.011 =================================================================================================================== 00:18:42.011 Total : 69311.22 270.75 0.00 0.00 10995.33 3991.74 21448.15 00:18:43.388 00:18:43.388 real 0m3.027s 00:18:43.388 user 0m2.199s 00:18:43.388 sys 0m0.648s 00:18:43.388 ************************************ 00:18:43.388 END TEST bdev_write_zeroes 00:18:43.388 ************************************ 00:18:43.388 17:09:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.388 17:09:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:43.388 17:09:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:43.388 17:09:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:43.388 17:09:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.388 17:09:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:43.388 ************************************ 00:18:43.388 START TEST bdev_json_nonenclosed 00:18:43.388 ************************************ 00:18:43.388 17:09:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:43.388 [2024-07-25 17:09:35.574434] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:43.388 [2024-07-25 17:09:35.574681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76396 ] 00:18:43.388 [2024-07-25 17:09:35.749164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.646 [2024-07-25 17:09:35.941862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.646 [2024-07-25 17:09:35.942012] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:18:43.646 [2024-07-25 17:09:35.942051] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:43.646 [2024-07-25 17:09:35.942076] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:43.905 00:18:43.905 real 0m0.841s 00:18:43.905 user 0m0.579s 00:18:43.905 sys 0m0.156s 00:18:43.905 17:09:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.905 ************************************ 00:18:43.905 END TEST bdev_json_nonenclosed 00:18:43.905 ************************************ 00:18:43.905 17:09:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:43.905 17:09:36 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:43.905 17:09:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:43.905 17:09:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.905 17:09:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:44.163 ************************************ 00:18:44.163 START TEST bdev_json_nonarray 00:18:44.163 ************************************ 00:18:44.163 17:09:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:44.163 [2024-07-25 17:09:36.479658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:44.163 [2024-07-25 17:09:36.479862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76427 ] 00:18:44.422 [2024-07-25 17:09:36.654109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.422 [2024-07-25 17:09:36.848723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.422 [2024-07-25 17:09:36.848889] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:44.422 [2024-07-25 17:09:36.848931] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:44.422 [2024-07-25 17:09:36.848958] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:44.988 00:18:44.988 real 0m0.825s 00:18:44.988 user 0m0.582s 00:18:44.988 sys 0m0.138s 00:18:44.988 17:09:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.988 17:09:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:44.988 ************************************ 00:18:44.988 END TEST bdev_json_nonarray 00:18:44.988 ************************************ 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:44.988 17:09:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:45.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:50.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:50.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:50.818 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:50.818 00:18:50.818 real 1m4.358s 00:18:50.818 user 1m40.432s 00:18:50.818 sys 0m42.648s 00:18:50.818 17:09:42 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.818 17:09:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.818 ************************************ 00:18:50.818 END TEST blockdev_xnvme 00:18:50.818 ************************************ 00:18:50.818 17:09:42 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:50.818 17:09:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:50.818 17:09:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.818 17:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:50.818 ************************************ 00:18:50.818 START TEST ublk 00:18:50.818 ************************************ 00:18:50.818 17:09:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:50.818 * Looking for test storage... 
00:18:50.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:50.818 17:09:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:50.818 17:09:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:50.818 17:09:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:50.818 17:09:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:50.818 17:09:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:50.818 17:09:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:50.818 17:09:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:50.818 17:09:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:50.818 17:09:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:50.818 17:09:43 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:50.818 17:09:43 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.818 17:09:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:50.818 ************************************ 00:18:50.818 START TEST test_save_ublk_config 00:18:50.818 ************************************ 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76716 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76716 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76716 ']' 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.818 17:09:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:50.818 [2024-07-25 17:09:43.204856] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:50.818 [2024-07-25 17:09:43.205065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76716 ] 00:18:51.079 [2024-07-25 17:09:43.368025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.339 [2024-07-25 17:09:43.580741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.904 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:51.904 [2024-07-25 17:09:44.345130] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:51.904 [2024-07-25 17:09:44.346473] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:52.162 malloc0 00:18:52.162 [2024-07-25 17:09:44.424208] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:52.162 [2024-07-25 17:09:44.424350] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:52.162 [2024-07-25 17:09:44.424375] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:52.162 [2024-07-25 17:09:44.424388] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:52.162 [2024-07-25 17:09:44.432148] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:52.162 [2024-07-25 17:09:44.432192] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:52.162 [2024-07-25 17:09:44.440047] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:52.162 [2024-07-25 17:09:44.440176] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:52.162 [2024-07-25 17:09:44.457034] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:52.162 0 00:18:52.162 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.162 17:09:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:52.162 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.162 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.421 17:09:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:52.421 "subsystems": [ 00:18:52.421 { 00:18:52.421 "subsystem": "keyring", 00:18:52.421 "config": [] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "iobuf", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "iobuf_set_options", 00:18:52.421 "params": { 00:18:52.421 "small_pool_count": 8192, 00:18:52.421 "large_pool_count": 1024, 00:18:52.421 "small_bufsize": 8192, 00:18:52.421 "large_bufsize": 135168 00:18:52.421 } 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 
00:18:52.421 "subsystem": "sock", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "sock_set_default_impl", 00:18:52.421 "params": { 00:18:52.421 "impl_name": "posix" 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "sock_impl_set_options", 00:18:52.421 "params": { 00:18:52.421 "impl_name": "ssl", 00:18:52.421 "recv_buf_size": 4096, 00:18:52.421 "send_buf_size": 4096, 00:18:52.421 "enable_recv_pipe": true, 00:18:52.421 "enable_quickack": false, 00:18:52.421 "enable_placement_id": 0, 00:18:52.421 "enable_zerocopy_send_server": true, 00:18:52.421 "enable_zerocopy_send_client": false, 00:18:52.421 "zerocopy_threshold": 0, 00:18:52.421 "tls_version": 0, 00:18:52.421 "enable_ktls": false 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "sock_impl_set_options", 00:18:52.421 "params": { 00:18:52.421 "impl_name": "posix", 00:18:52.421 "recv_buf_size": 2097152, 00:18:52.421 "send_buf_size": 2097152, 00:18:52.421 "enable_recv_pipe": true, 00:18:52.421 "enable_quickack": false, 00:18:52.421 "enable_placement_id": 0, 00:18:52.421 "enable_zerocopy_send_server": true, 00:18:52.421 "enable_zerocopy_send_client": false, 00:18:52.421 "zerocopy_threshold": 0, 00:18:52.421 "tls_version": 0, 00:18:52.421 "enable_ktls": false 00:18:52.421 } 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "vmd", 00:18:52.421 "config": [] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "accel", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "accel_set_options", 00:18:52.421 "params": { 00:18:52.421 "small_cache_size": 128, 00:18:52.421 "large_cache_size": 16, 00:18:52.421 "task_count": 2048, 00:18:52.421 "sequence_count": 2048, 00:18:52.421 "buf_count": 2048 00:18:52.421 } 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "bdev", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "bdev_set_options", 00:18:52.421 "params": { 00:18:52.421 "bdev_io_pool_size": 65535, 00:18:52.421 "bdev_io_cache_size": 256, 00:18:52.421 "bdev_auto_examine": true, 00:18:52.421 "iobuf_small_cache_size": 128, 00:18:52.421 "iobuf_large_cache_size": 16 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_raid_set_options", 00:18:52.421 "params": { 00:18:52.421 "process_window_size_kb": 1024, 00:18:52.421 "process_max_bandwidth_mb_sec": 0 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_iscsi_set_options", 00:18:52.421 "params": { 00:18:52.421 "timeout_sec": 30 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_nvme_set_options", 00:18:52.421 "params": { 00:18:52.421 "action_on_timeout": "none", 00:18:52.421 "timeout_us": 0, 00:18:52.421 "timeout_admin_us": 0, 00:18:52.421 "keep_alive_timeout_ms": 10000, 00:18:52.421 "arbitration_burst": 0, 00:18:52.421 "low_priority_weight": 0, 00:18:52.421 "medium_priority_weight": 0, 00:18:52.421 "high_priority_weight": 0, 00:18:52.421 "nvme_adminq_poll_period_us": 10000, 00:18:52.421 "nvme_ioq_poll_period_us": 0, 00:18:52.421 "io_queue_requests": 0, 00:18:52.421 "delay_cmd_submit": true, 00:18:52.421 "transport_retry_count": 4, 00:18:52.421 "bdev_retry_count": 3, 00:18:52.421 "transport_ack_timeout": 0, 00:18:52.421 "ctrlr_loss_timeout_sec": 0, 00:18:52.421 "reconnect_delay_sec": 0, 00:18:52.421 "fast_io_fail_timeout_sec": 0, 00:18:52.421 "disable_auto_failback": false, 00:18:52.421 "generate_uuids": false, 00:18:52.421 "transport_tos": 0, 00:18:52.421 "nvme_error_stat": false, 
00:18:52.421 "rdma_srq_size": 0, 00:18:52.421 "io_path_stat": false, 00:18:52.421 "allow_accel_sequence": false, 00:18:52.421 "rdma_max_cq_size": 0, 00:18:52.421 "rdma_cm_event_timeout_ms": 0, 00:18:52.421 "dhchap_digests": [ 00:18:52.421 "sha256", 00:18:52.421 "sha384", 00:18:52.421 "sha512" 00:18:52.421 ], 00:18:52.421 "dhchap_dhgroups": [ 00:18:52.421 "null", 00:18:52.421 "ffdhe2048", 00:18:52.421 "ffdhe3072", 00:18:52.421 "ffdhe4096", 00:18:52.421 "ffdhe6144", 00:18:52.421 "ffdhe8192" 00:18:52.421 ] 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_nvme_set_hotplug", 00:18:52.421 "params": { 00:18:52.421 "period_us": 100000, 00:18:52.421 "enable": false 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_malloc_create", 00:18:52.421 "params": { 00:18:52.421 "name": "malloc0", 00:18:52.421 "num_blocks": 8192, 00:18:52.421 "block_size": 4096, 00:18:52.421 "physical_block_size": 4096, 00:18:52.421 "uuid": "f2d3ae23-4908-4b17-aae7-9b32ef658f0f", 00:18:52.421 "optimal_io_boundary": 0, 00:18:52.421 "md_size": 0, 00:18:52.421 "dif_type": 0, 00:18:52.421 "dif_is_head_of_md": false, 00:18:52.421 "dif_pi_format": 0 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "bdev_wait_for_examine" 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "scsi", 00:18:52.421 "config": null 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "scheduler", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "framework_set_scheduler", 00:18:52.421 "params": { 00:18:52.421 "name": "static" 00:18:52.421 } 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "vhost_scsi", 00:18:52.421 "config": [] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "vhost_blk", 00:18:52.421 "config": [] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "ublk", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "ublk_create_target", 00:18:52.421 "params": { 00:18:52.421 "cpumask": "1" 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "ublk_start_disk", 00:18:52.421 "params": { 00:18:52.421 "bdev_name": "malloc0", 00:18:52.421 "ublk_id": 0, 00:18:52.421 "num_queues": 1, 00:18:52.421 "queue_depth": 128 00:18:52.421 } 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "nbd", 00:18:52.421 "config": [] 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "subsystem": "nvmf", 00:18:52.421 "config": [ 00:18:52.421 { 00:18:52.421 "method": "nvmf_set_config", 00:18:52.421 "params": { 00:18:52.421 "discovery_filter": "match_any", 00:18:52.421 "admin_cmd_passthru": { 00:18:52.421 "identify_ctrlr": false 00:18:52.421 } 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "nvmf_set_max_subsystems", 00:18:52.421 "params": { 00:18:52.421 "max_subsystems": 1024 00:18:52.421 } 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "method": "nvmf_set_crdt", 00:18:52.422 "params": { 00:18:52.422 "crdt1": 0, 00:18:52.422 "crdt2": 0, 00:18:52.422 "crdt3": 0 00:18:52.422 } 00:18:52.422 } 00:18:52.422 ] 00:18:52.422 }, 00:18:52.422 { 00:18:52.422 "subsystem": "iscsi", 00:18:52.422 "config": [ 00:18:52.422 { 00:18:52.422 "method": "iscsi_set_options", 00:18:52.422 "params": { 00:18:52.422 "node_base": "iqn.2016-06.io.spdk", 00:18:52.422 "max_sessions": 128, 00:18:52.422 "max_connections_per_session": 2, 00:18:52.422 "max_queue_depth": 64, 00:18:52.422 "default_time2wait": 2, 00:18:52.422 "default_time2retain": 20, 00:18:52.422 
"first_burst_length": 8192, 00:18:52.422 "immediate_data": true, 00:18:52.422 "allow_duplicated_isid": false, 00:18:52.422 "error_recovery_level": 0, 00:18:52.422 "nop_timeout": 60, 00:18:52.422 "nop_in_interval": 30, 00:18:52.422 "disable_chap": false, 00:18:52.422 "require_chap": false, 00:18:52.422 "mutual_chap": false, 00:18:52.422 "chap_group": 0, 00:18:52.422 "max_large_datain_per_connection": 64, 00:18:52.422 "max_r2t_per_connection": 4, 00:18:52.422 "pdu_pool_size": 36864, 00:18:52.422 "immediate_data_pool_size": 16384, 00:18:52.422 "data_out_pool_size": 2048 00:18:52.422 } 00:18:52.422 } 00:18:52.422 ] 00:18:52.422 } 00:18:52.422 ] 00:18:52.422 }' 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76716 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76716 ']' 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76716 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76716 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.422 killing process with pid 76716 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76716' 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76716 00:18:52.422 17:09:44 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76716 00:18:53.820 [2024-07-25 17:09:45.988110] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:53.820 [2024-07-25 17:09:46.024111] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:53.820 [2024-07-25 17:09:46.024298] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:53.820 [2024-07-25 17:09:46.034094] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:53.820 [2024-07-25 17:09:46.034167] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:53.820 [2024-07-25 17:09:46.034181] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:53.820 [2024-07-25 17:09:46.034220] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:53.820 [2024-07-25 17:09:46.034468] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76775 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76775 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76775 ']' 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:54.755 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.755 17:09:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:54.755 "subsystems": [ 00:18:54.755 { 00:18:54.755 "subsystem": "keyring", 00:18:54.755 "config": [] 00:18:54.755 }, 00:18:54.755 { 00:18:54.755 "subsystem": "iobuf", 00:18:54.755 "config": [ 00:18:54.755 { 00:18:54.755 "method": "iobuf_set_options", 00:18:54.755 "params": { 00:18:54.755 "small_pool_count": 8192, 00:18:54.755 "large_pool_count": 1024, 00:18:54.755 "small_bufsize": 8192, 00:18:54.755 "large_bufsize": 135168 00:18:54.755 } 00:18:54.755 } 00:18:54.755 ] 00:18:54.755 }, 00:18:54.755 { 00:18:54.755 "subsystem": "sock", 00:18:54.755 "config": [ 00:18:54.755 { 00:18:54.755 "method": "sock_set_default_impl", 00:18:54.755 "params": { 00:18:54.755 "impl_name": "posix" 00:18:54.755 } 00:18:54.755 }, 00:18:54.755 { 00:18:54.755 "method": "sock_impl_set_options", 00:18:54.755 "params": { 00:18:54.755 "impl_name": "ssl", 00:18:54.755 "recv_buf_size": 4096, 00:18:54.755 "send_buf_size": 4096, 00:18:54.755 "enable_recv_pipe": true, 00:18:54.755 "enable_quickack": false, 00:18:54.755 "enable_placement_id": 0, 00:18:54.755 "enable_zerocopy_send_server": true, 00:18:54.755 "enable_zerocopy_send_client": false, 00:18:54.755 "zerocopy_threshold": 0, 00:18:54.755 "tls_version": 0, 00:18:54.755 "enable_ktls": false 00:18:54.755 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "sock_impl_set_options", 00:18:54.756 "params": { 00:18:54.756 "impl_name": "posix", 00:18:54.756 "recv_buf_size": 2097152, 00:18:54.756 "send_buf_size": 2097152, 00:18:54.756 "enable_recv_pipe": true, 00:18:54.756 "enable_quickack": false, 00:18:54.756 "enable_placement_id": 0, 00:18:54.756 "enable_zerocopy_send_server": true, 00:18:54.756 "enable_zerocopy_send_client": false, 00:18:54.756 "zerocopy_threshold": 0, 00:18:54.756 "tls_version": 0, 00:18:54.756 "enable_ktls": false 00:18:54.756 } 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "vmd", 00:18:54.756 "config": [] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "accel", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "accel_set_options", 00:18:54.756 "params": { 00:18:54.756 "small_cache_size": 128, 00:18:54.756 "large_cache_size": 16, 00:18:54.756 "task_count": 2048, 00:18:54.756 "sequence_count": 2048, 00:18:54.756 "buf_count": 2048 00:18:54.756 } 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "bdev", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "bdev_set_options", 00:18:54.756 "params": { 00:18:54.756 "bdev_io_pool_size": 65535, 00:18:54.756 "bdev_io_cache_size": 256, 00:18:54.756 "bdev_auto_examine": true, 00:18:54.756 "iobuf_small_cache_size": 128, 00:18:54.756 "iobuf_large_cache_size": 16 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_raid_set_options", 00:18:54.756 "params": { 00:18:54.756 "process_window_size_kb": 1024, 00:18:54.756 "process_max_bandwidth_mb_sec": 0 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_iscsi_set_options", 00:18:54.756 "params": { 00:18:54.756 "timeout_sec": 30 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_nvme_set_options", 00:18:54.756 "params": { 
00:18:54.756 "action_on_timeout": "none", 00:18:54.756 "timeout_us": 0, 00:18:54.756 "timeout_admin_us": 0, 00:18:54.756 "keep_alive_timeout_ms": 10000, 00:18:54.756 "arbitration_burst": 0, 00:18:54.756 "low_priority_weight": 0, 00:18:54.756 "medium_priority_weight": 0, 00:18:54.756 "high_priority_weight": 0, 00:18:54.756 "nvme_adminq_poll_period_us": 10000, 00:18:54.756 "nvme_ioq_poll_period_us": 0, 00:18:54.756 "io_queue_requests": 0, 00:18:54.756 "delay_cmd_submit": true, 00:18:54.756 "transport_retry_count": 4, 00:18:54.756 "bdev_retry_count": 3, 00:18:54.756 "transport_ack_timeout": 0, 00:18:54.756 "ctrlr_loss_timeout_sec": 0, 00:18:54.756 "reconnect_delay_sec": 0, 00:18:54.756 "fast_io_fail_timeout_sec": 0, 00:18:54.756 "disable_auto_failback": false, 00:18:54.756 "generate_uuids": false, 00:18:54.756 "transport_tos": 0, 00:18:54.756 "nvme_error_stat": false, 00:18:54.756 "rdma_srq_size": 0, 00:18:54.756 "io_path_stat": false, 00:18:54.756 "allow_accel_sequence": false, 00:18:54.756 "rdma_max_cq_size": 0, 00:18:54.756 "rdma_cm_event_timeout_ms": 0, 00:18:54.756 "dhchap_digests": [ 00:18:54.756 "sha256", 00:18:54.756 "sha384", 00:18:54.756 "sha512" 00:18:54.756 ], 00:18:54.756 "dhchap_dhgroups": [ 00:18:54.756 "null", 00:18:54.756 "ffdhe2048", 00:18:54.756 "ffdhe3072", 00:18:54.756 "ffdhe4096", 00:18:54.756 "ffdhe6144", 00:18:54.756 "ffdhe8192" 00:18:54.756 ] 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_nvme_set_hotplug", 00:18:54.756 "params": { 00:18:54.756 "period_us": 100000, 00:18:54.756 "enable": false 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_malloc_create", 00:18:54.756 "params": { 00:18:54.756 "name": "malloc0", 00:18:54.756 "num_blocks": 8192, 00:18:54.756 "block_size": 4096, 00:18:54.756 "physical_block_size": 4096, 00:18:54.756 "uuid": "f2d3ae23-4908-4b17-aae7-9b32ef658f0f", 00:18:54.756 "optimal_io_boundary": 0, 00:18:54.756 "md_size": 0, 00:18:54.756 "dif_type": 0, 00:18:54.756 "dif_is_head_of_md": false, 00:18:54.756 "dif_pi_format": 0 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "bdev_wait_for_examine" 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "scsi", 00:18:54.756 "config": null 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "scheduler", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "framework_set_scheduler", 00:18:54.756 "params": { 00:18:54.756 "name": "static" 00:18:54.756 } 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "vhost_scsi", 00:18:54.756 "config": [] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "vhost_blk", 00:18:54.756 "config": [] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "ublk", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "ublk_create_target", 00:18:54.756 "params": { 00:18:54.756 "cpumask": "1" 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "ublk_start_disk", 00:18:54.756 "params": { 00:18:54.756 "bdev_name": "malloc0", 00:18:54.756 "ublk_id": 0, 00:18:54.756 "num_queues": 1, 00:18:54.756 "queue_depth": 128 00:18:54.756 } 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "nbd", 00:18:54.756 "config": [] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "nvmf", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "nvmf_set_config", 00:18:54.756 "params": { 00:18:54.756 "discovery_filter": "match_any", 00:18:54.756 "admin_cmd_passthru": { 
00:18:54.756 "identify_ctrlr": false 00:18:54.756 } 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "nvmf_set_max_subsystems", 00:18:54.756 "params": { 00:18:54.756 "max_subsystems": 1024 00:18:54.756 } 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "method": "nvmf_set_crdt", 00:18:54.756 "params": { 00:18:54.756 "crdt1": 0, 00:18:54.756 "crdt2": 0, 00:18:54.756 "crdt3": 0 00:18:54.756 } 00:18:54.756 } 00:18:54.756 ] 00:18:54.756 }, 00:18:54.756 { 00:18:54.756 "subsystem": "iscsi", 00:18:54.756 "config": [ 00:18:54.756 { 00:18:54.756 "method": "iscsi_set_options", 00:18:54.756 "params": { 00:18:54.756 "node_base": "iqn.2016-06.io.spdk", 00:18:54.756 "max_sessions": 128, 00:18:54.756 "max_connections_per_session": 2, 00:18:54.756 "max_queue_depth": 64, 00:18:54.757 "default_time2wait": 2, 00:18:54.757 "default_time2retain": 20, 00:18:54.757 "first_burst_length": 8192, 00:18:54.757 "immediate_data": true, 00:18:54.757 "allow_duplicated_isid": false, 00:18:54.757 "error_recovery_level": 0, 00:18:54.757 "nop_timeout": 60, 00:18:54.757 "nop_in_interval": 30, 00:18:54.757 "disable_chap": false, 00:18:54.757 "require_chap": false, 00:18:54.757 "mutual_chap": false, 00:18:54.757 "chap_group": 0, 00:18:54.757 "max_large_datain_per_connection": 64, 00:18:54.757 "max_r2t_per_connection": 4, 00:18:54.757 "pdu_pool_size": 36864, 00:18:54.757 "immediate_data_pool_size": 16384, 00:18:54.757 "data_out_pool_size": 2048 00:18:54.757 } 00:18:54.757 } 00:18:54.757 ] 00:18:54.757 } 00:18:54.757 ] 00:18:54.757 }' 00:18:54.757 17:09:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:55.015 [2024-07-25 17:09:47.324733] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:55.015 [2024-07-25 17:09:47.324952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76775 ] 00:18:55.274 [2024-07-25 17:09:47.494623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.274 [2024-07-25 17:09:47.711703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.209 [2024-07-25 17:09:48.614054] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:56.209 [2024-07-25 17:09:48.615416] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:56.209 [2024-07-25 17:09:48.621374] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:56.209 [2024-07-25 17:09:48.621540] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:56.209 [2024-07-25 17:09:48.621558] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:56.209 [2024-07-25 17:09:48.621567] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:56.209 [2024-07-25 17:09:48.630159] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:56.209 [2024-07-25 17:09:48.630205] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:56.209 [2024-07-25 17:09:48.640097] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:56.209 [2024-07-25 17:09:48.640227] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:56.209 [2024-07-25 17:09:48.656069] ublk.c: 
329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76775 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76775 ']' 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76775 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76775 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.467 killing process with pid 76775 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76775' 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76775 00:18:56.467 17:09:48 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76775 00:18:57.842 [2024-07-25 17:09:50.147019] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:57.842 [2024-07-25 17:09:50.182012] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:57.842 [2024-07-25 17:09:50.182210] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:57.842 [2024-07-25 17:09:50.190060] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:57.842 [2024-07-25 17:09:50.190144] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:57.842 [2024-07-25 17:09:50.190159] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:57.842 [2024-07-25 17:09:50.190193] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:57.842 [2024-07-25 17:09:50.190445] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:59.219 17:09:51 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:59.219 00:18:59.219 real 0m8.236s 00:18:59.219 user 0m6.824s 00:18:59.219 sys 0m2.211s 00:18:59.219 17:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.219 17:09:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:59.219 ************************************ 00:18:59.219 END TEST test_save_ublk_config 00:18:59.219 
************************************ 00:18:59.219 17:09:51 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76854 00:18:59.219 17:09:51 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:59.219 17:09:51 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.219 17:09:51 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76854 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@831 -- # '[' -z 76854 ']' 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.219 17:09:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:59.219 [2024-07-25 17:09:51.508543] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:59.219 [2024-07-25 17:09:51.508745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76854 ] 00:18:59.219 [2024-07-25 17:09:51.684029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:59.477 [2024-07-25 17:09:51.893109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.477 [2024-07-25 17:09:51.893119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.413 17:09:52 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.413 17:09:52 ublk -- common/autotest_common.sh@864 -- # return 0 00:19:00.413 17:09:52 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:00.413 17:09:52 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:00.413 17:09:52 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.413 17:09:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.413 ************************************ 00:19:00.413 START TEST test_create_ublk 00:19:00.413 ************************************ 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:19:00.413 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.413 [2024-07-25 17:09:52.663061] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:00.413 [2024-07-25 17:09:52.670123] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.413 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:00.413 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.413 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.671 17:09:52 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.671 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:00.671 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:00.671 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.671 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.671 [2024-07-25 17:09:52.940260] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:00.671 [2024-07-25 17:09:52.940968] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:00.671 [2024-07-25 17:09:52.941040] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:00.671 [2024-07-25 17:09:52.941057] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:00.671 [2024-07-25 17:09:52.947097] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:00.671 [2024-07-25 17:09:52.947137] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:00.671 [2024-07-25 17:09:52.956065] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:00.671 [2024-07-25 17:09:52.965270] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:00.671 [2024-07-25 17:09:52.991058] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:00.671 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.671 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:00.671 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:00.671 17:09:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:00.671 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.671 17:09:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:00.671 17:09:53 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:00.671 { 00:19:00.671 "ublk_device": "/dev/ublkb0", 00:19:00.671 "id": 0, 00:19:00.671 "queue_depth": 512, 00:19:00.671 "num_queues": 4, 00:19:00.671 "bdev_name": "Malloc0" 00:19:00.671 } 00:19:00.671 ]' 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:00.671 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:00.929 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:00.930 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:00.930 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:00.930 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:00.930 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:00.930 17:09:53 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:00.930 
17:09:53 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:00.930 17:09:53 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:01.188 fio: verification read phase will never start because write phase uses all of runtime 00:19:01.188 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:01.188 fio-3.35 00:19:01.188 Starting 1 process 00:19:11.153 00:19:11.153 fio_test: (groupid=0, jobs=1): err= 0: pid=76904: Thu Jul 25 17:10:03 2024 00:19:11.153 write: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(427MiB/10001msec); 0 zone resets 00:19:11.153 clat (usec): min=63, max=7902, avg=90.10, stdev=159.36 00:19:11.153 lat (usec): min=64, max=7903, avg=90.80, stdev=159.38 00:19:11.153 clat percentiles (usec): 00:19:11.153 | 1.00th=[ 71], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 74], 00:19:11.153 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 79], 00:19:11.153 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 110], 00:19:11.153 | 99.00th=[ 139], 99.50th=[ 163], 99.90th=[ 3228], 99.95th=[ 3523], 00:19:11.153 | 99.99th=[ 3818] 00:19:11.153 bw ( KiB/s): min=20408, max=46504, per=99.83%, avg=43694.32, stdev=5699.73, samples=19 00:19:11.153 iops : min= 5102, max=11626, avg=10923.58, stdev=1424.93, samples=19 00:19:11.153 lat (usec) : 100=91.48%, 250=8.14%, 500=0.01%, 750=0.01%, 1000=0.02% 00:19:11.153 lat (msec) : 2=0.10%, 4=0.24%, 10=0.01% 00:19:11.153 cpu : usr=2.62%, sys=7.72%, ctx=109432, majf=0, minf=796 00:19:11.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:11.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.153 issued rwts: total=0,109430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:11.153 00:19:11.153 Run status group 0 (all jobs): 00:19:11.153 WRITE: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=427MiB (448MB), run=10001-10001msec 00:19:11.153 00:19:11.153 Disk stats (read/write): 00:19:11.153 ublkb0: ios=0/108257, merge=0/0, ticks=0/8947, in_queue=8947, util=99.10% 00:19:11.153 17:10:03 
ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.153 [2024-07-25 17:10:03.528795] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:11.153 [2024-07-25 17:10:03.582579] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:11.153 [2024-07-25 17:10:03.584014] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:11.153 [2024-07-25 17:10:03.588034] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:11.153 [2024-07-25 17:10:03.588432] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:11.153 [2024-07-25 17:10:03.588469] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.153 17:10:03 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.153 [2024-07-25 17:10:03.604145] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:11.153 request: 00:19:11.153 { 00:19:11.153 "ublk_id": 0, 00:19:11.153 "method": "ublk_stop_disk", 00:19:11.153 "req_id": 1 00:19:11.153 } 00:19:11.153 Got JSON-RPC error response 00:19:11.153 response: 00:19:11.153 { 00:19:11.153 "code": -19, 00:19:11.153 "message": "No such device" 00:19:11.153 } 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.153 17:10:03 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.153 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.153 [2024-07-25 17:10:03.620159] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:11.410 [2024-07-25 17:10:03.627060] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:11.410 [2024-07-25 17:10:03.627122] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:11.410 
17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.410 17:10:03 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:11.410 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.410 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.669 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.669 17:10:03 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:11.670 17:10:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:11.670 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.670 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.670 17:10:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:11.670 17:10:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:11.670 17:10:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:11.670 17:10:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:11.670 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.670 17:10:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 17:10:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.670 17:10:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:11.670 17:10:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:11.670 17:10:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:11.670 00:19:11.670 real 0m11.399s 00:19:11.670 user 0m0.718s 00:19:11.670 sys 0m0.856s 00:19:11.670 17:10:04 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.670 17:10:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 ************************************ 00:19:11.670 END TEST test_create_ublk 00:19:11.670 ************************************ 00:19:11.670 17:10:04 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:11.670 17:10:04 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:11.670 17:10:04 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.670 17:10:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 ************************************ 00:19:11.670 START TEST test_create_multi_ublk 00:19:11.670 ************************************ 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 [2024-07-25 17:10:04.112065] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:11.670 [2024-07-25 17:10:04.114999] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 
00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.670 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.927 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.927 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:11.927 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:11.927 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.927 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:11.927 [2024-07-25 17:10:04.373233] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:11.927 [2024-07-25 17:10:04.373842] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:11.927 [2024-07-25 17:10:04.373910] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:11.927 [2024-07-25 17:10:04.373921] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:11.927 [2024-07-25 17:10:04.382431] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:11.927 [2024-07-25 17:10:04.382474] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:11.927 [2024-07-25 17:10:04.389146] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:11.927 [2024-07-25 17:10:04.390088] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:12.184 [2024-07-25 17:10:04.398294] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:12.184 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.184 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:12.184 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:12.184 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:12.184 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.185 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.443 [2024-07-25 17:10:04.665253] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:12.443 [2024-07-25 17:10:04.665891] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:12.443 [2024-07-25 17:10:04.665942] 
ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:12.443 [2024-07-25 17:10:04.665956] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:12.443 [2024-07-25 17:10:04.673052] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:12.443 [2024-07-25 17:10:04.673104] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:12.443 [2024-07-25 17:10:04.680047] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:12.443 [2024-07-25 17:10:04.680942] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:12.443 [2024-07-25 17:10:04.703033] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.443 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.701 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.701 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:12.701 17:10:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:12.701 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.701 17:10:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.701 [2024-07-25 17:10:04.973250] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:12.701 [2024-07-25 17:10:04.973878] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:12.701 [2024-07-25 17:10:04.973911] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:12.701 [2024-07-25 17:10:04.973921] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:12.701 [2024-07-25 17:10:04.979046] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:12.701 [2024-07-25 17:10:04.979103] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:12.701 [2024-07-25 17:10:04.989031] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:12.701 [2024-07-25 17:10:04.989905] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:12.701 [2024-07-25 17:10:04.998086] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:12.701 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.959 [2024-07-25 17:10:05.280245] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:12.959 [2024-07-25 17:10:05.280914] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:12.959 [2024-07-25 17:10:05.280941] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:12.959 [2024-07-25 17:10:05.280966] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:12.959 [2024-07-25 17:10:05.288090] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:12.959 [2024-07-25 17:10:05.288142] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:12.959 [2024-07-25 17:10:05.296115] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:12.959 [2024-07-25 17:10:05.297115] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:12.959 [2024-07-25 17:10:05.305075] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:12.959 { 00:19:12.959 "ublk_device": "/dev/ublkb0", 00:19:12.959 "id": 0, 00:19:12.959 "queue_depth": 512, 00:19:12.959 "num_queues": 4, 00:19:12.959 "bdev_name": "Malloc0" 00:19:12.959 }, 00:19:12.959 { 00:19:12.959 "ublk_device": "/dev/ublkb1", 00:19:12.959 "id": 1, 00:19:12.959 "queue_depth": 512, 00:19:12.959 "num_queues": 4, 00:19:12.959 "bdev_name": "Malloc1" 00:19:12.959 }, 00:19:12.959 { 00:19:12.959 "ublk_device": "/dev/ublkb2", 00:19:12.959 "id": 2, 00:19:12.959 "queue_depth": 512, 00:19:12.959 "num_queues": 4, 00:19:12.959 "bdev_name": "Malloc2" 00:19:12.959 }, 00:19:12.959 { 00:19:12.959 "ublk_device": "/dev/ublkb3", 00:19:12.959 "id": 3, 00:19:12.959 "queue_depth": 512, 00:19:12.959 "num_queues": 4, 00:19:12.959 "bdev_name": "Malloc3" 00:19:12.959 } 00:19:12.959 ]' 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:12.959 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:13.217 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:13.474 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:13.475 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:13.732 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:13.732 17:10:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.732 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.990 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.248 [2024-07-25 17:10:06.462528] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:14.248 [2024-07-25 17:10:06.509659] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:14.248 [2024-07-25 17:10:06.512374] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:14.248 [2024-07-25 17:10:06.518092] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:14.248 [2024-07-25 17:10:06.518565] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:14.248 [2024-07-25 17:10:06.518606] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.248 [2024-07-25 17:10:06.525398] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:14.248 [2024-07-25 17:10:06.562560] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:14.248 [2024-07-25 17:10:06.564171] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:14.248 [2024-07-25 17:10:06.570066] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:14.248 [2024-07-25 17:10:06.570385] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:14.248 [2024-07-25 17:10:06.570399] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.248 [2024-07-25 17:10:06.585145] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:14.248 [2024-07-25 17:10:06.625565] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:14.248 [2024-07-25 17:10:06.627144] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:14.248 [2024-07-25 17:10:06.629650] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:14.248 [2024-07-25 17:10:06.630045] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:14.248 [2024-07-25 17:10:06.630083] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:14.248 [2024-07-25 17:10:06.647169] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:14.248 [2024-07-25 17:10:06.685556] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:14.248 [2024-07-25 17:10:06.687017] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:14.248 [2024-07-25 17:10:06.693124] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:14.248 [2024-07-25 17:10:06.693499] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:14.248 [2024-07-25 17:10:06.693544] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.248 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:14.506 [2024-07-25 17:10:06.959234] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:14.506 [2024-07-25 17:10:06.969055] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:14.506 [2024-07-25 17:10:06.969147] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:14.763 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:14.763 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:14.763 17:10:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:14.763 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.763 17:10:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.021 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.021 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:15.021 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.021 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.280 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.280 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:19:15.280 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:15.280 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.280 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.538 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.538 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:15.538 17:10:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:15.538 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.538 17:10:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:15.796 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:16.056 17:10:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:16.056 00:19:16.056 real 0m4.201s 00:19:16.056 user 0m1.378s 00:19:16.056 sys 0m0.171s 00:19:16.056 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.056 17:10:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.056 ************************************ 00:19:16.056 END TEST test_create_multi_ublk 00:19:16.056 ************************************ 00:19:16.056 17:10:08 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:16.056 17:10:08 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:16.056 17:10:08 ublk -- ublk/ublk.sh@130 -- # killprocess 76854 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@950 -- # '[' -z 76854 ']' 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@954 -- # kill -0 76854 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@955 -- # uname 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76854 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:16.056 
17:10:08 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:16.056 killing process with pid 76854 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76854' 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@969 -- # kill 76854 00:19:16.056 17:10:08 ublk -- common/autotest_common.sh@974 -- # wait 76854 00:19:16.990 [2024-07-25 17:10:09.299028] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:19:16.990 [2024-07-25 17:10:09.299106] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:19:17.922 ************************************ 00:19:17.922 END TEST ublk 00:19:17.922 ************************************ 00:19:17.922 00:19:17.922 real 0m27.380s 00:19:17.922 user 0m41.008s 00:19:17.922 sys 0m8.629s 00:19:17.922 17:10:10 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:17.922 17:10:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.181 17:10:10 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:18.181 17:10:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:18.181 17:10:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.181 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.181 ************************************ 00:19:18.181 START TEST ublk_recovery 00:19:18.181 ************************************ 00:19:18.181 17:10:10 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:18.181 * Looking for test storage... 00:19:18.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:18.181 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:18.181 17:10:10 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:18.181 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:18.181 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77238 00:19:18.182 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:18.182 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.182 17:10:10 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77238 00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77238 ']' 00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.182 17:10:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.182 [2024-07-25 17:10:10.614870] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:18.182 [2024-07-25 17:10:10.615059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77238 ] 00:19:18.441 [2024-07-25 17:10:10.773564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.700 [2024-07-25 17:10:10.971532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.700 [2024-07-25 17:10:10.971534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.266 17:10:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.266 17:10:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:19:19.266 17:10:11 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:19.266 17:10:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.266 17:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.266 [2024-07-25 17:10:11.721060] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:19.266 [2024-07-25 17:10:11.724106] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:19.266 17:10:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.267 17:10:11 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:19.267 17:10:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.267 17:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.526 malloc0 00:19:19.526 17:10:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.526 17:10:11 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:19.526 17:10:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.526 17:10:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.526 [2024-07-25 17:10:11.862215] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:19:19.526 [2024-07-25 17:10:11.862389] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:19.526 [2024-07-25 17:10:11.862405] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:19.526 [2024-07-25 17:10:11.862417] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:19.526 [2024-07-25 17:10:11.870059] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:19.526 [2024-07-25 17:10:11.870097] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:19.526 [2024-07-25 17:10:11.877057] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:19.526 [2024-07-25 17:10:11.877258] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:19.526 [2024-07-25 17:10:11.906022] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:19.526 1 00:19:19.526 17:10:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:19.526 17:10:11 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:20.521 17:10:12 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77273 00:19:20.521 17:10:12 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:20.521 17:10:12 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:20.779 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:20.779 fio-3.35 00:19:20.779 Starting 1 process 00:19:26.048 17:10:17 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77238 00:19:26.048 17:10:17 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:31.316 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77238 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:31.316 17:10:22 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77383 00:19:31.316 17:10:22 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:31.316 17:10:22 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.316 17:10:22 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77383 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77383 ']' 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.316 17:10:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.316 [2024-07-25 17:10:23.048470] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:31.316 [2024-07-25 17:10:23.049412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77383 ] 00:19:31.316 [2024-07-25 17:10:23.226600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.316 [2024-07-25 17:10:23.426712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.316 [2024-07-25 17:10:23.426712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:19:31.882 17:10:24 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.882 [2024-07-25 17:10:24.183067] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:31.882 [2024-07-25 17:10:24.186084] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.882 17:10:24 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.882 malloc0 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.882 17:10:24 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.882 [2024-07-25 17:10:24.321229] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:31.882 [2024-07-25 17:10:24.321307] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:31.882 [2024-07-25 17:10:24.321321] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:31.882 1 00:19:31.882 17:10:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.883 17:10:24 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77273 00:19:31.883 [2024-07-25 17:10:24.330030] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:31.883 [2024-07-25 17:10:24.330062] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:31.883 [2024-07-25 17:10:24.330179] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:58.419 [2024-07-25 17:10:48.331046] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:58.419 [2024-07-25 17:10:48.335482] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:58.419 [2024-07-25 17:10:48.345259] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:58.419 [2024-07-25 17:10:48.345316] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:25.044 00:20:25.044 
fio_test: (groupid=0, jobs=1): err= 0: pid=77276: Thu Jul 25 17:11:13 2024 00:20:25.044 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(2455MiB/60003msec) 00:20:25.044 slat (nsec): min=1779, max=763281, avg=6158.70, stdev=4492.15 00:20:25.044 clat (usec): min=920, max=30431k, avg=5811.31, stdev=294775.09 00:20:25.044 lat (usec): min=925, max=30431k, avg=5817.47, stdev=294775.08 00:20:25.044 clat percentiles (usec): 00:20:25.044 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2671], 00:20:25.044 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:20:25.044 | 70.00th=[ 2900], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 4047], 00:20:25.044 | 99.00th=[ 6325], 99.50th=[ 6915], 99.90th=[ 8717], 99.95th=[ 9765], 00:20:25.045 | 99.99th=[13829] 00:20:25.045 bw ( KiB/s): min=25560, max=92016, per=100.00%, avg=83958.51, stdev=11283.00, samples=59 00:20:25.045 iops : min= 6390, max=23004, avg=20989.63, stdev=2820.75, samples=59 00:20:25.045 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(2453MiB/60003msec); 0 zone resets 00:20:25.045 slat (nsec): min=1968, max=1467.6k, avg=6173.62, stdev=5108.66 00:20:25.045 clat (usec): min=929, max=30431k, avg=6398.62, stdev=318898.11 00:20:25.045 lat (usec): min=935, max=30431k, avg=6404.80, stdev=318898.11 00:20:25.045 clat percentiles (msec): 00:20:25.045 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:20:25.045 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:20:25.045 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:20:25.045 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 10], 00:20:25.045 | 99.99th=[17113] 00:20:25.045 bw ( KiB/s): min=24400, max=92392, per=100.00%, avg=83877.69, stdev=11266.09, samples=59 00:20:25.045 iops : min= 6100, max=23098, avg=20969.46, stdev=2816.53, samples=59 00:20:25.045 lat (usec) : 1000=0.01% 00:20:25.045 lat (msec) : 2=0.15%, 4=94.87%, 10=4.94%, 20=0.03%, >=2000=0.01% 00:20:25.045 cpu : usr=5.74%, sys=11.81%, ctx=37334, majf=0, minf=13 00:20:25.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.045 issued rwts: total=628557,628075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.045 00:20:25.045 Run status group 0 (all jobs): 00:20:25.045 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=2455MiB (2575MB), run=60003-60003msec 00:20:25.045 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=2453MiB (2573MB), run=60003-60003msec 00:20:25.045 00:20:25.045 Disk stats (read/write): 00:20:25.045 ublkb1: ios=626166/625564, merge=0/0, ticks=3583980/3882953, in_queue=7466933, util=99.93% 00:20:25.045 17:11:13 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.045 [2024-07-25 17:11:13.179901] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:25.045 [2024-07-25 17:11:13.228197] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:25.045 [2024-07-25 17:11:13.228481] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:25.045 [2024-07-25 17:11:13.236178] ublk.c: 
329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:25.045 [2024-07-25 17:11:13.236315] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:25.045 [2024-07-25 17:11:13.236340] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.045 17:11:13 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.045 [2024-07-25 17:11:13.250178] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.045 [2024-07-25 17:11:13.259467] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:25.045 [2024-07-25 17:11:13.259536] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.045 17:11:13 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:25.045 17:11:13 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:25.045 17:11:13 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77383 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77383 ']' 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77383 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77383 00:20:25.045 killing process with pid 77383 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77383' 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77383 00:20:25.045 17:11:13 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77383 00:20:25.045 [2024-07-25 17:11:14.206495] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.045 [2024-07-25 17:11:14.206560] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:20:25.045 00:20:25.045 real 1m4.947s 00:20:25.045 user 1m51.304s 00:20:25.045 sys 0m18.277s 00:20:25.045 ************************************ 00:20:25.045 END TEST ublk_recovery 00:20:25.045 ************************************ 00:20:25.045 17:11:15 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:25.045 17:11:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:25.045 17:11:15 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@264 -- # timing_exit lib 00:20:25.045 17:11:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.045 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:20:25.045 17:11:15 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- 
spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:20:25.045 17:11:15 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.045 17:11:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:25.045 17:11:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.045 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:20:25.045 ************************************ 00:20:25.045 START TEST ftl 00:20:25.045 ************************************ 00:20:25.045 17:11:15 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.045 * Looking for test storage... 00:20:25.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.045 17:11:15 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:25.045 17:11:15 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:25.045 17:11:15 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.045 17:11:15 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.045 17:11:15 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:25.045 17:11:15 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:25.045 17:11:15 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.045 17:11:15 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.045 17:11:15 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.045 17:11:15 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.045 17:11:15 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.045 17:11:15 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:25.045 17:11:15 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:25.045 17:11:15 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.045 17:11:15 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.045 17:11:15 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:25.045 17:11:15 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.045 17:11:15 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.045 17:11:15 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.045 17:11:15 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.045 17:11:15 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:25.045 17:11:15 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:25.045 17:11:15 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.045 17:11:15 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.045 17:11:15 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.045 17:11:15 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:25.046 17:11:15 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:25.046 17:11:15 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:25.046 17:11:15 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:25.046 17:11:15 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:25.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.046 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.046 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.046 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.046 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:25.046 17:11:16 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78165 00:20:25.046 17:11:16 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:25.046 17:11:16 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78165 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@831 -- # '[' -z 78165 ']' 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.046 17:11:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:25.046 [2024-07-25 17:11:16.181877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:25.046 [2024-07-25 17:11:16.182069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78165 ] 00:20:25.046 [2024-07-25 17:11:16.342399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.046 [2024-07-25 17:11:16.549950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.046 17:11:17 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.046 17:11:17 ftl -- common/autotest_common.sh@864 -- # return 0 00:20:25.046 17:11:17 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:25.046 17:11:17 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:25.981 17:11:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:25.981 17:11:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:26.546 17:11:18 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:26.546 17:11:18 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:26.546 17:11:18 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@50 -- # break 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:26.803 17:11:19 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:27.060 17:11:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:27.060 17:11:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:27.060 17:11:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:27.060 17:11:19 ftl -- ftl/ftl.sh@63 -- # break 00:20:27.060 17:11:19 ftl -- ftl/ftl.sh@66 -- # killprocess 78165 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@950 -- # '[' -z 78165 ']' 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@954 -- # kill -0 78165 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@955 -- # uname 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78165 00:20:27.060 killing process with pid 78165 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78165' 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@969 -- # kill 78165 00:20:27.060 17:11:19 ftl -- common/autotest_common.sh@974 -- # wait 78165 00:20:28.957 17:11:21 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:28.957 17:11:21 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:28.957 17:11:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:28.957 17:11:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.957 17:11:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:28.957 ************************************ 00:20:28.957 START TEST ftl_fio_basic 00:20:28.957 ************************************ 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:28.957 * Looking for test storage... 00:20:28.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78295 00:20:28.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78295 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 78295 ']' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.957 17:11:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:29.216 [2024-07-25 17:11:21.512287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:29.216 [2024-07-25 17:11:21.512472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78295 ] 00:20:29.474 [2024-07-25 17:11:21.685188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.474 [2024-07-25 17:11:21.911072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.474 [2024-07-25 17:11:21.911216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.474 [2024-07-25 17:11:21.911229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:30.408 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:30.666 17:11:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:30.942 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:30.942 { 00:20:30.942 "name": "nvme0n1", 00:20:30.942 "aliases": [ 00:20:30.942 "7b8bc3d6-0313-436f-8641-6262eb7cf205" 00:20:30.942 ], 00:20:30.942 "product_name": "NVMe disk", 00:20:30.942 "block_size": 4096, 00:20:30.942 "num_blocks": 1310720, 00:20:30.942 "uuid": "7b8bc3d6-0313-436f-8641-6262eb7cf205", 00:20:30.942 "assigned_rate_limits": { 00:20:30.942 "rw_ios_per_sec": 0, 00:20:30.942 "rw_mbytes_per_sec": 0, 00:20:30.942 "r_mbytes_per_sec": 0, 00:20:30.942 "w_mbytes_per_sec": 0 00:20:30.942 }, 00:20:30.942 "claimed": false, 00:20:30.942 "zoned": false, 00:20:30.942 "supported_io_types": { 00:20:30.942 "read": true, 00:20:30.942 "write": true, 00:20:30.942 "unmap": true, 00:20:30.942 "flush": true, 00:20:30.942 "reset": true, 00:20:30.942 "nvme_admin": true, 00:20:30.942 "nvme_io": true, 00:20:30.942 "nvme_io_md": false, 00:20:30.943 "write_zeroes": true, 00:20:30.943 "zcopy": false, 00:20:30.943 "get_zone_info": false, 00:20:30.943 "zone_management": false, 00:20:30.943 "zone_append": false, 00:20:30.943 "compare": true, 00:20:30.943 "compare_and_write": false, 00:20:30.943 "abort": true, 00:20:30.943 "seek_hole": false, 00:20:30.943 
"seek_data": false, 00:20:30.943 "copy": true, 00:20:30.943 "nvme_iov_md": false 00:20:30.943 }, 00:20:30.943 "driver_specific": { 00:20:30.943 "nvme": [ 00:20:30.943 { 00:20:30.943 "pci_address": "0000:00:11.0", 00:20:30.943 "trid": { 00:20:30.943 "trtype": "PCIe", 00:20:30.943 "traddr": "0000:00:11.0" 00:20:30.943 }, 00:20:30.943 "ctrlr_data": { 00:20:30.943 "cntlid": 0, 00:20:30.943 "vendor_id": "0x1b36", 00:20:30.943 "model_number": "QEMU NVMe Ctrl", 00:20:30.943 "serial_number": "12341", 00:20:30.943 "firmware_revision": "8.0.0", 00:20:30.943 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:30.943 "oacs": { 00:20:30.943 "security": 0, 00:20:30.943 "format": 1, 00:20:30.943 "firmware": 0, 00:20:30.943 "ns_manage": 1 00:20:30.943 }, 00:20:30.943 "multi_ctrlr": false, 00:20:30.943 "ana_reporting": false 00:20:30.943 }, 00:20:30.943 "vs": { 00:20:30.943 "nvme_version": "1.4" 00:20:30.943 }, 00:20:30.943 "ns_data": { 00:20:30.943 "id": 1, 00:20:30.943 "can_share": false 00:20:30.943 } 00:20:30.943 } 00:20:30.943 ], 00:20:30.943 "mp_policy": "active_passive" 00:20:30.943 } 00:20:30.943 } 00:20:30.943 ]' 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:30.943 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:31.200 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:31.200 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:31.458 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=31cf0704-2b15-4d23-9d22-da781be200cd 00:20:31.458 17:11:23 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 31cf0704-2b15-4d23-9d22-da781be200cd 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.716 17:11:24 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:31.716 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:31.974 { 00:20:31.974 "name": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:31.974 "aliases": [ 00:20:31.974 "lvs/nvme0n1p0" 00:20:31.974 ], 00:20:31.974 "product_name": "Logical Volume", 00:20:31.974 "block_size": 4096, 00:20:31.974 "num_blocks": 26476544, 00:20:31.974 "uuid": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:31.974 "assigned_rate_limits": { 00:20:31.974 "rw_ios_per_sec": 0, 00:20:31.974 "rw_mbytes_per_sec": 0, 00:20:31.974 "r_mbytes_per_sec": 0, 00:20:31.974 "w_mbytes_per_sec": 0 00:20:31.974 }, 00:20:31.974 "claimed": false, 00:20:31.974 "zoned": false, 00:20:31.974 "supported_io_types": { 00:20:31.974 "read": true, 00:20:31.974 "write": true, 00:20:31.974 "unmap": true, 00:20:31.974 "flush": false, 00:20:31.974 "reset": true, 00:20:31.974 "nvme_admin": false, 00:20:31.974 "nvme_io": false, 00:20:31.974 "nvme_io_md": false, 00:20:31.974 "write_zeroes": true, 00:20:31.974 "zcopy": false, 00:20:31.974 "get_zone_info": false, 00:20:31.974 "zone_management": false, 00:20:31.974 "zone_append": false, 00:20:31.974 "compare": false, 00:20:31.974 "compare_and_write": false, 00:20:31.974 "abort": false, 00:20:31.974 "seek_hole": true, 00:20:31.974 "seek_data": true, 00:20:31.974 "copy": false, 00:20:31.974 "nvme_iov_md": false 00:20:31.974 }, 00:20:31.974 "driver_specific": { 00:20:31.974 "lvol": { 00:20:31.974 "lvol_store_uuid": "31cf0704-2b15-4d23-9d22-da781be200cd", 00:20:31.974 "base_bdev": "nvme0n1", 00:20:31.974 "thin_provision": true, 00:20:31.974 "num_allocated_clusters": 0, 00:20:31.974 "snapshot": false, 00:20:31.974 "clone": false, 00:20:31.974 "esnap_clone": false 00:20:31.974 } 00:20:31.974 } 00:20:31.974 } 00:20:31.974 ]' 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:31.974 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:32.232 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:32.490 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:32.490 { 00:20:32.490 "name": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:32.490 "aliases": [ 00:20:32.490 "lvs/nvme0n1p0" 00:20:32.490 ], 00:20:32.490 "product_name": "Logical Volume", 00:20:32.490 "block_size": 4096, 00:20:32.490 "num_blocks": 26476544, 00:20:32.490 "uuid": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:32.490 "assigned_rate_limits": { 00:20:32.490 "rw_ios_per_sec": 0, 00:20:32.490 "rw_mbytes_per_sec": 0, 00:20:32.490 "r_mbytes_per_sec": 0, 00:20:32.490 "w_mbytes_per_sec": 0 00:20:32.490 }, 00:20:32.490 "claimed": false, 00:20:32.490 "zoned": false, 00:20:32.490 "supported_io_types": { 00:20:32.490 "read": true, 00:20:32.490 "write": true, 00:20:32.490 "unmap": true, 00:20:32.490 "flush": false, 00:20:32.490 "reset": true, 00:20:32.490 "nvme_admin": false, 00:20:32.490 "nvme_io": false, 00:20:32.490 "nvme_io_md": false, 00:20:32.490 "write_zeroes": true, 00:20:32.490 "zcopy": false, 00:20:32.490 "get_zone_info": false, 00:20:32.490 "zone_management": false, 00:20:32.490 "zone_append": false, 00:20:32.490 "compare": false, 00:20:32.490 "compare_and_write": false, 00:20:32.490 "abort": false, 00:20:32.490 "seek_hole": true, 00:20:32.490 "seek_data": true, 00:20:32.490 "copy": false, 00:20:32.490 "nvme_iov_md": false 00:20:32.490 }, 00:20:32.490 "driver_specific": { 00:20:32.490 "lvol": { 00:20:32.490 "lvol_store_uuid": "31cf0704-2b15-4d23-9d22-da781be200cd", 00:20:32.490 "base_bdev": "nvme0n1", 00:20:32.490 "thin_provision": true, 00:20:32.490 "num_allocated_clusters": 0, 00:20:32.490 "snapshot": false, 00:20:32.490 "clone": false, 00:20:32.490 "esnap_clone": false 00:20:32.490 } 00:20:32.490 } 00:20:32.490 } 00:20:32.490 ]' 00:20:32.490 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:32.490 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:32.490 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:32.748 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:32.748 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:32.748 17:11:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:32.748 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:32.748 17:11:24 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:32.748 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:32.748 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:32.749 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 
00:20:32.749 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:32.749 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 00:20:33.007 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:33.007 { 00:20:33.007 "name": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:33.007 "aliases": [ 00:20:33.007 "lvs/nvme0n1p0" 00:20:33.007 ], 00:20:33.007 "product_name": "Logical Volume", 00:20:33.007 "block_size": 4096, 00:20:33.007 "num_blocks": 26476544, 00:20:33.007 "uuid": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:33.007 "assigned_rate_limits": { 00:20:33.007 "rw_ios_per_sec": 0, 00:20:33.007 "rw_mbytes_per_sec": 0, 00:20:33.007 "r_mbytes_per_sec": 0, 00:20:33.007 "w_mbytes_per_sec": 0 00:20:33.007 }, 00:20:33.007 "claimed": false, 00:20:33.007 "zoned": false, 00:20:33.007 "supported_io_types": { 00:20:33.007 "read": true, 00:20:33.007 "write": true, 00:20:33.007 "unmap": true, 00:20:33.007 "flush": false, 00:20:33.007 "reset": true, 00:20:33.007 "nvme_admin": false, 00:20:33.007 "nvme_io": false, 00:20:33.007 "nvme_io_md": false, 00:20:33.007 "write_zeroes": true, 00:20:33.007 "zcopy": false, 00:20:33.007 "get_zone_info": false, 00:20:33.007 "zone_management": false, 00:20:33.007 "zone_append": false, 00:20:33.007 "compare": false, 00:20:33.007 "compare_and_write": false, 00:20:33.007 "abort": false, 00:20:33.007 "seek_hole": true, 00:20:33.007 "seek_data": true, 00:20:33.007 "copy": false, 00:20:33.007 "nvme_iov_md": false 00:20:33.007 }, 00:20:33.007 "driver_specific": { 00:20:33.007 "lvol": { 00:20:33.007 "lvol_store_uuid": "31cf0704-2b15-4d23-9d22-da781be200cd", 00:20:33.008 "base_bdev": "nvme0n1", 00:20:33.008 "thin_provision": true, 00:20:33.008 "num_allocated_clusters": 0, 00:20:33.008 "snapshot": false, 00:20:33.008 "clone": false, 00:20:33.008 "esnap_clone": false 00:20:33.008 } 00:20:33.008 } 00:20:33.008 } 00:20:33.008 ]' 00:20:33.008 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:33.008 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:33.008 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:33.267 17:11:25 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2 -c nvc0n1p0 --l2p_dram_limit 60 00:20:33.267 [2024-07-25 17:11:25.686307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.686363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:33.267 [2024-07-25 17:11:25.686400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:33.267 [2024-07-25 17:11:25.686414] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.686498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.686518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.267 [2024-07-25 17:11:25.686531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.267 [2024-07-25 17:11:25.686543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.686574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:33.267 [2024-07-25 17:11:25.687618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:33.267 [2024-07-25 17:11:25.687658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.687681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.267 [2024-07-25 17:11:25.687694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:20:33.267 [2024-07-25 17:11:25.687707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.687881] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f4be1415-89e9-4dbe-9361-4096457eff1c 00:20:33.267 [2024-07-25 17:11:25.689838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.689875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:33.267 [2024-07-25 17:11:25.689894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:33.267 [2024-07-25 17:11:25.689905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.699627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.699665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.267 [2024-07-25 17:11:25.699686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.577 ms 00:20:33.267 [2024-07-25 17:11:25.699697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.699860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.699896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.267 [2024-07-25 17:11:25.699915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:33.267 [2024-07-25 17:11:25.699927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.267 [2024-07-25 17:11:25.700045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.267 [2024-07-25 17:11:25.700065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:33.267 [2024-07-25 17:11:25.700081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:33.268 [2024-07-25 17:11:25.700094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.268 [2024-07-25 17:11:25.700175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.268 [2024-07-25 17:11:25.705214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.268 [2024-07-25 17:11:25.705257] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.268 [2024-07-25 17:11:25.705272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.053 ms 00:20:33.268 [2024-07-25 17:11:25.705284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.268 [2024-07-25 17:11:25.705338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.268 [2024-07-25 17:11:25.705357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:33.268 [2024-07-25 17:11:25.705369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:33.268 [2024-07-25 17:11:25.705381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.268 [2024-07-25 17:11:25.705430] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:33.268 [2024-07-25 17:11:25.705594] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:33.268 [2024-07-25 17:11:25.705620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:33.268 [2024-07-25 17:11:25.705644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:33.268 [2024-07-25 17:11:25.705658] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:33.268 [2024-07-25 17:11:25.705673] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:33.268 [2024-07-25 17:11:25.705684] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:33.268 [2024-07-25 17:11:25.705696] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:33.268 [2024-07-25 17:11:25.705709] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:33.268 [2024-07-25 17:11:25.705721] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:33.268 [2024-07-25 17:11:25.705732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.268 [2024-07-25 17:11:25.705744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:33.268 [2024-07-25 17:11:25.705755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:33.268 [2024-07-25 17:11:25.705767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.268 [2024-07-25 17:11:25.705856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.268 [2024-07-25 17:11:25.705873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:33.268 [2024-07-25 17:11:25.705884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:33.268 [2024-07-25 17:11:25.705896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.268 [2024-07-25 17:11:25.706016] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:33.268 [2024-07-25 17:11:25.706040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:33.268 [2024-07-25 17:11:25.706052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:33.268 [2024-07-25 
17:11:25.706086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:33.268 [2024-07-25 17:11:25.706117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.268 [2024-07-25 17:11:25.706139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:33.268 [2024-07-25 17:11:25.706151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:33.268 [2024-07-25 17:11:25.706160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:33.268 [2024-07-25 17:11:25.706172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:33.268 [2024-07-25 17:11:25.706181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:33.268 [2024-07-25 17:11:25.706192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:33.268 [2024-07-25 17:11:25.706215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:33.268 [2024-07-25 17:11:25.706245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:33.268 [2024-07-25 17:11:25.706277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:33.268 [2024-07-25 17:11:25.706306] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:33.268 [2024-07-25 17:11:25.706338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:33.268 [2024-07-25 17:11:25.706367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:33.268 [2024-07-25 17:11:25.706390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:33.268 [2024-07-25 17:11:25.706403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:33.268 [2024-07-25 17:11:25.706412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:20:33.268 [2024-07-25 17:11:25.706424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:33.268 [2024-07-25 17:11:25.706433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:33.268 [2024-07-25 17:11:25.706444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:33.268 [2024-07-25 17:11:25.706471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:33.268 [2024-07-25 17:11:25.706481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706492] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:33.268 [2024-07-25 17:11:25.706503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:33.268 [2024-07-25 17:11:25.706534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:33.268 [2024-07-25 17:11:25.706558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:33.268 [2024-07-25 17:11:25.706569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:33.268 [2024-07-25 17:11:25.706584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:33.268 [2024-07-25 17:11:25.706594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:33.268 [2024-07-25 17:11:25.706605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:33.268 [2024-07-25 17:11:25.706615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:33.268 [2024-07-25 17:11:25.706640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:33.268 [2024-07-25 17:11:25.706672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.268 [2024-07-25 17:11:25.706689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:33.268 [2024-07-25 17:11:25.706700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:33.268 [2024-07-25 17:11:25.706714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:33.268 [2024-07-25 17:11:25.706725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:33.268 [2024-07-25 17:11:25.706738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:33.268 [2024-07-25 17:11:25.706749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:33.268 [2024-07-25 17:11:25.706762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:33.268 [2024-07-25 17:11:25.706772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:33.268 [2024-07-25 
17:11:25.706785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:33.268 [2024-07-25 17:11:25.706796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:33.268 [2024-07-25 17:11:25.706810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:33.269 [2024-07-25 17:11:25.706821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:33.269 [2024-07-25 17:11:25.706839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:33.269 [2024-07-25 17:11:25.706850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:33.269 [2024-07-25 17:11:25.706863] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:33.269 [2024-07-25 17:11:25.706875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:33.269 [2024-07-25 17:11:25.706888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:33.269 [2024-07-25 17:11:25.706899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:33.269 [2024-07-25 17:11:25.706917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:33.269 [2024-07-25 17:11:25.706929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:33.269 [2024-07-25 17:11:25.706944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.269 [2024-07-25 17:11:25.706971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:33.269 [2024-07-25 17:11:25.706984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:20:33.269 [2024-07-25 17:11:25.706995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.269 [2024-07-25 17:11:25.707456] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:20:33.269 [2024-07-25 17:11:25.707541] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:36.591 [2024-07-25 17:11:28.584894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.585862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:36.591 [2024-07-25 17:11:28.586186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2877.445 ms 00:20:36.591 [2024-07-25 17:11:28.586440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.622924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.623479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.591 [2024-07-25 17:11:28.623729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.888 ms 00:20:36.591 [2024-07-25 17:11:28.623892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.624366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.624663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.591 [2024-07-25 17:11:28.624759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:36.591 [2024-07-25 17:11:28.624839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.678177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.678519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.591 [2024-07-25 17:11:28.678683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.185 ms 00:20:36.591 [2024-07-25 17:11:28.678774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.678927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.679227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.591 [2024-07-25 17:11:28.679369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:36.591 [2024-07-25 17:11:28.679396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.680705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.681012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.591 [2024-07-25 17:11:28.681294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:20:36.591 [2024-07-25 17:11:28.681591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.681905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.682171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.591 [2024-07-25 17:11:28.682335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:36.591 [2024-07-25 17:11:28.682483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.708271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.708381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.591 [2024-07-25 
17:11:28.708471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.530 ms 00:20:36.591 [2024-07-25 17:11:28.708707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.723311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:36.591 [2024-07-25 17:11:28.750534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.750776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.591 [2024-07-25 17:11:28.750865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.614 ms 00:20:36.591 [2024-07-25 17:11:28.750939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.814007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.814513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:36.591 [2024-07-25 17:11:28.814552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.893 ms 00:20:36.591 [2024-07-25 17:11:28.814570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.814891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.814916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.591 [2024-07-25 17:11:28.814946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:20:36.591 [2024-07-25 17:11:28.814964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.842465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.842550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:36.591 [2024-07-25 17:11:28.842569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.357 ms 00:20:36.591 [2024-07-25 17:11:28.842584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.869644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.869708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:36.591 [2024-07-25 17:11:28.869726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:20:36.591 [2024-07-25 17:11:28.869739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.870841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.870882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.591 [2024-07-25 17:11:28.870915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:20:36.591 [2024-07-25 17:11:28.870958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:28.961220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.961286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:36.591 [2024-07-25 17:11:28.961318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.149 ms 00:20:36.591 [2024-07-25 17:11:28.961352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 
17:11:28.993222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:28.993286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:36.591 [2024-07-25 17:11:28.993304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.780 ms 00:20:36.591 [2024-07-25 17:11:28.993322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:29.021698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:29.021772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:36.591 [2024-07-25 17:11:29.021790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.321 ms 00:20:36.591 [2024-07-25 17:11:29.021803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:29.051905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:29.051992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.591 [2024-07-25 17:11:29.052013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.048 ms 00:20:36.591 [2024-07-25 17:11:29.052029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:29.052095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:29.052119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.591 [2024-07-25 17:11:29.052133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.591 [2024-07-25 17:11:29.052151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:29.052346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.591 [2024-07-25 17:11:29.052372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.591 [2024-07-25 17:11:29.052387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:36.591 [2024-07-25 17:11:29.052401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.591 [2024-07-25 17:11:29.054063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3367.110 ms, result 0 00:20:36.850 { 00:20:36.850 "name": "ftl0", 00:20:36.850 "uuid": "f4be1415-89e9-4dbe-9361-4096457eff1c" 00:20:36.850 } 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:36.850 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:37.109 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:37.368 [ 00:20:37.368 { 00:20:37.368 "name": "ftl0", 00:20:37.368 "aliases": [ 00:20:37.368 "f4be1415-89e9-4dbe-9361-4096457eff1c" 00:20:37.368 ], 00:20:37.368 "product_name": "FTL 
disk", 00:20:37.368 "block_size": 4096, 00:20:37.368 "num_blocks": 20971520, 00:20:37.368 "uuid": "f4be1415-89e9-4dbe-9361-4096457eff1c", 00:20:37.368 "assigned_rate_limits": { 00:20:37.368 "rw_ios_per_sec": 0, 00:20:37.368 "rw_mbytes_per_sec": 0, 00:20:37.368 "r_mbytes_per_sec": 0, 00:20:37.368 "w_mbytes_per_sec": 0 00:20:37.368 }, 00:20:37.368 "claimed": false, 00:20:37.368 "zoned": false, 00:20:37.368 "supported_io_types": { 00:20:37.368 "read": true, 00:20:37.368 "write": true, 00:20:37.368 "unmap": true, 00:20:37.368 "flush": true, 00:20:37.368 "reset": false, 00:20:37.368 "nvme_admin": false, 00:20:37.368 "nvme_io": false, 00:20:37.368 "nvme_io_md": false, 00:20:37.368 "write_zeroes": true, 00:20:37.368 "zcopy": false, 00:20:37.368 "get_zone_info": false, 00:20:37.368 "zone_management": false, 00:20:37.368 "zone_append": false, 00:20:37.368 "compare": false, 00:20:37.368 "compare_and_write": false, 00:20:37.368 "abort": false, 00:20:37.368 "seek_hole": false, 00:20:37.368 "seek_data": false, 00:20:37.368 "copy": false, 00:20:37.368 "nvme_iov_md": false 00:20:37.368 }, 00:20:37.368 "driver_specific": { 00:20:37.368 "ftl": { 00:20:37.368 "base_bdev": "6f8bd3e4-7b5d-44d0-8d76-bea9b2b8a2b2", 00:20:37.368 "cache": "nvc0n1p0" 00:20:37.368 } 00:20:37.368 } 00:20:37.368 } 00:20:37.368 ] 00:20:37.368 17:11:29 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:20:37.368 17:11:29 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:37.368 17:11:29 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:37.627 17:11:29 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:37.627 17:11:29 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:37.627 [2024-07-25 17:11:30.058408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.627 [2024-07-25 17:11:30.058483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.628 [2024-07-25 17:11:30.058527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.628 [2024-07-25 17:11:30.058540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.628 [2024-07-25 17:11:30.058590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.628 [2024-07-25 17:11:30.062458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.628 [2024-07-25 17:11:30.062497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.628 [2024-07-25 17:11:30.062530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.844 ms 00:20:37.628 [2024-07-25 17:11:30.062544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.628 [2024-07-25 17:11:30.063105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.628 [2024-07-25 17:11:30.063151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.628 [2024-07-25 17:11:30.063171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:20:37.628 [2024-07-25 17:11:30.063191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.628 [2024-07-25 17:11:30.066404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.628 [2024-07-25 17:11:30.066452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.628 
[2024-07-25 17:11:30.066484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:20:37.628 [2024-07-25 17:11:30.066497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.628 [2024-07-25 17:11:30.072738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.628 [2024-07-25 17:11:30.072775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.628 [2024-07-25 17:11:30.072807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:20:37.628 [2024-07-25 17:11:30.072824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.104526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.104709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.888 [2024-07-25 17:11:30.104738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.630 ms 00:20:37.888 [2024-07-25 17:11:30.104754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.124743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.124814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.888 [2024-07-25 17:11:30.124833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.924 ms 00:20:37.888 [2024-07-25 17:11:30.124847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.125133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.125164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.888 [2024-07-25 17:11:30.125180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:37.888 [2024-07-25 17:11:30.125194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.154625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.154692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:37.888 [2024-07-25 17:11:30.154710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.399 ms 00:20:37.888 [2024-07-25 17:11:30.154725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.182821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.182886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:37.888 [2024-07-25 17:11:30.182903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.027 ms 00:20:37.888 [2024-07-25 17:11:30.182916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.210401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.210469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.888 [2024-07-25 17:11:30.210486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.432 ms 00:20:37.888 [2024-07-25 17:11:30.210500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.238224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.888 [2024-07-25 17:11:30.238276] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.888 [2024-07-25 17:11:30.238294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.566 ms 00:20:37.888 [2024-07-25 17:11:30.238307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.888 [2024-07-25 17:11:30.238364] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.888 [2024-07-25 17:11:30.238393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 
[2024-07-25 17:11:30.238758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.238988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.888 [2024-07-25 17:11:30.239173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.888 [2024-07-25 17:11:30.239303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.889 [2024-07-25 17:11:30.239933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.889 [2024-07-25 17:11:30.239945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4be1415-89e9-4dbe-9361-4096457eff1c 00:20:37.889 [2024-07-25 17:11:30.239964] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.889 [2024-07-25 17:11:30.239986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.889 [2024-07-25 17:11:30.240006] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.889 [2024-07-25 17:11:30.240019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.889 [2024-07-25 17:11:30.240032] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.889 [2024-07-25 17:11:30.240043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.889 [2024-07-25 17:11:30.240063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.889 [2024-07-25 17:11:30.240075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.889 [2024-07-25 17:11:30.240088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.889 [2024-07-25 17:11:30.240100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.889 [2024-07-25 17:11:30.240114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.889 [2024-07-25 17:11:30.240126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:20:37.889 [2024-07-25 17:11:30.240140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-25 17:11:30.257121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.889 [2024-07-25 17:11:30.257183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.889 [2024-07-25 17:11:30.257201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.902 ms 00:20:37.889 [2024-07-25 17:11:30.257214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-25 17:11:30.257735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.889 [2024-07-25 17:11:30.257772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.889 [2024-07-25 17:11:30.257789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:20:37.889 [2024-07-25 17:11:30.257806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-25 17:11:30.315528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-25 17:11:30.315580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.889 [2024-07-25 17:11:30.315615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-25 17:11:30.315629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
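The persist steps, band-validity dump, and statistics above, together with the rollback (resource teardown) steps that follow, are the FTL management trace produced when fio.sh shuts the device down over RPC. A minimal sketch of that shutdown pattern, assuming a running SPDK target that already exposes an FTL bdev named ftl0, with rpc.py invoked from the SPDK repository root and ftl.json as an illustrative output path:

  # Capture the bdev subsystem configuration as a complete JSON config
  # (the same wrapping pattern fio.sh uses in the trace above)
  {
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > ftl.json

  # Unload the FTL bdev; this drives the persist/dump/rollback steps traced here
  scripts/rpc.py bdev_ftl_unload -b ftl0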
00:20:37.889 [2024-07-25 17:11:30.315708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-25 17:11:30.315727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.889 [2024-07-25 17:11:30.315739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-25 17:11:30.315756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-25 17:11:30.315926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-25 17:11:30.315959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.889 [2024-07-25 17:11:30.315973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-25 17:11:30.316036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.889 [2024-07-25 17:11:30.316089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.889 [2024-07-25 17:11:30.316111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.889 [2024-07-25 17:11:30.316123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.889 [2024-07-25 17:11:30.316138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.147 [2024-07-25 17:11:30.414182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.414282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.148 [2024-07-25 17:11:30.414303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.414317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.490390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.490480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.148 [2024-07-25 17:11:30.490501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.490515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.490666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.490691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.148 [2024-07-25 17:11:30.490705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.490719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.490834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.490864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.148 [2024-07-25 17:11:30.490877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.490890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.491377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.491441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.148 [2024-07-25 17:11:30.491484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 
17:11:30.491603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.491732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.491894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.148 [2024-07-25 17:11:30.492027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.492060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.492130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.492155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.148 [2024-07-25 17:11:30.492168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.492182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.492267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.148 [2024-07-25 17:11:30.492305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.148 [2024-07-25 17:11:30.492317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.148 [2024-07-25 17:11:30.492329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.148 [2024-07-25 17:11:30.492534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 434.101 ms, result 0 00:20:38.148 true 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78295 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 78295 ']' 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 78295 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78295 00:20:38.148 killing process with pid 78295 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78295' 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 78295 00:20:38.148 17:11:30 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 78295 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:43.412 17:11:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:43.412 17:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:43.412 17:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:43.412 17:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:43.412 17:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.412 17:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:43.412 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:43.412 fio-3.35 00:20:43.412 Starting 1 thread 00:20:48.680 00:20:48.680 test: (groupid=0, jobs=1): err= 0: pid=78501: Thu Jul 25 17:11:40 2024 00:20:48.680 read: IOPS=886, BW=58.9MiB/s (61.8MB/s)(255MiB/4322msec) 00:20:48.680 slat (nsec): min=5330, max=58085, avg=7524.21, stdev=3622.46 00:20:48.680 clat (usec): min=357, max=1374, avg=499.97, stdev=48.98 00:20:48.680 lat (usec): min=363, max=1398, avg=507.49, stdev=49.93 00:20:48.681 clat percentiles (usec): 00:20:48.681 | 1.00th=[ 420], 5.00th=[ 445], 10.00th=[ 457], 20.00th=[ 465], 00:20:48.681 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:20:48.681 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 594], 00:20:48.681 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 758], 99.95th=[ 775], 00:20:48.681 | 99.99th=[ 1369] 00:20:48.681 write: IOPS=893, BW=59.3MiB/s (62.2MB/s)(256MiB/4318msec); 0 zone resets 00:20:48.681 slat (usec): min=18, max=140, avg=24.01, stdev= 6.98 00:20:48.681 clat (usec): min=414, max=1853, avg=578.40, stdev=65.75 00:20:48.681 lat (usec): min=435, max=1874, avg=602.41, stdev=66.30 00:20:48.681 clat percentiles (usec): 00:20:48.681 | 1.00th=[ 474], 5.00th=[ 498], 10.00th=[ 519], 20.00th=[ 537], 00:20:48.681 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:20:48.681 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:20:48.681 | 99.00th=[ 865], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 1287], 00:20:48.681 | 99.99th=[ 1860] 00:20:48.681 bw ( KiB/s): min=59160, max=61472, per=99.97%, avg=60707.00, stdev=799.44, samples=8 00:20:48.681 iops : min= 870, max= 904, avg=892.75, stdev=11.76, samples=8 00:20:48.681 lat (usec) : 500=33.80%, 750=65.17%, 1000=0.99% 00:20:48.681 lat (msec) : 
2=0.04% 00:20:48.681 cpu : usr=99.07%, sys=0.16%, ctx=6, majf=0, minf=1171 00:20:48.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.681 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:48.681 00:20:48.681 Run status group 0 (all jobs): 00:20:48.681 READ: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=255MiB (267MB), run=4322-4322msec 00:20:48.681 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=256MiB (269MB), run=4318-4318msec 00:20:50.055 ----------------------------------------------------- 00:20:50.055 Suppressions used: 00:20:50.055 count bytes template 00:20:50.055 1 5 /usr/src/fio/parse.c 00:20:50.055 1 8 libtcmalloc_minimal.so 00:20:50.055 1 904 libcrypto.so 00:20:50.055 ----------------------------------------------------- 00:20:50.055 00:20:50.055 17:11:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:50.055 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:50.055 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:50.055 17:11:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:50.055 17:11:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:50.056 17:11:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:50.056 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:50.056 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:50.056 fio-3.35 00:20:50.056 Starting 2 threads 00:21:22.189 00:21:22.189 first_half: (groupid=0, jobs=1): err= 0: pid=78604: Thu Jul 25 17:12:11 2024 00:21:22.189 read: IOPS=2382, BW=9531KiB/s (9759kB/s)(256MiB/27480msec) 00:21:22.189 slat (usec): min=4, max=138, avg= 7.58, stdev= 2.61 00:21:22.189 clat (usec): min=714, max=329244, avg=45687.18, stdev=26986.68 00:21:22.189 lat (usec): min=719, max=329282, avg=45694.76, stdev=26986.92 00:21:22.189 clat percentiles (msec): 00:21:22.189 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 39], 00:21:22.189 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:21:22.189 | 70.00th=[ 42], 80.00th=[ 45], 90.00th=[ 47], 95.00th=[ 87], 00:21:22.189 | 99.00th=[ 186], 99.50th=[ 199], 99.90th=[ 245], 99.95th=[ 292], 00:21:22.189 | 99.99th=[ 321] 00:21:22.189 write: IOPS=2389, BW=9556KiB/s (9786kB/s)(256MiB/27431msec); 0 zone resets 00:21:22.189 slat (usec): min=5, max=148, avg= 8.74, stdev= 5.22 00:21:22.189 clat (usec): min=467, max=46510, avg=7990.08, stdev=7982.99 00:21:22.189 lat (usec): min=476, max=46518, avg=7998.82, stdev=7983.15 00:21:22.189 clat percentiles (usec): 00:21:22.189 | 1.00th=[ 1074], 5.00th=[ 1467], 10.00th=[ 1778], 20.00th=[ 3326], 00:21:22.189 | 30.00th=[ 4228], 40.00th=[ 5342], 50.00th=[ 5932], 60.00th=[ 6915], 00:21:22.189 | 70.00th=[ 7504], 80.00th=[ 8979], 90.00th=[15270], 95.00th=[27395], 00:21:22.189 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:21:22.189 | 99.99th=[45876] 00:21:22.189 bw ( KiB/s): min= 456, max=51536, per=100.00%, avg=21692.67, stdev=12922.29, samples=24 00:21:22.189 iops : min= 114, max=12884, avg=5423.17, stdev=3230.57, samples=24 00:21:22.189 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.30% 00:21:22.189 lat (msec) : 2=5.80%, 4=7.58%, 10=27.61%, 20=6.89%, 50=47.50% 00:21:22.189 lat (msec) : 100=2.05%, 250=2.17%, 500=0.05% 00:21:22.189 cpu : usr=98.87%, sys=0.35%, ctx=63, majf=0, minf=5534 00:21:22.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:22.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.189 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.189 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.189 second_half: (groupid=0, jobs=1): err= 0: pid=78605: Thu Jul 25 17:12:11 2024 00:21:22.189 read: IOPS=2404, BW=9619KiB/s (9849kB/s)(256MiB/27234msec) 00:21:22.189 slat (nsec): min=4396, max=60619, avg=7992.31, stdev=3022.89 00:21:22.189 clat (msec): min=11, max=236, avg=45.99, stdev=23.76 00:21:22.189 lat (msec): min=11, max=236, avg=46.00, stdev=23.76 00:21:22.189 clat percentiles (msec): 00:21:22.189 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 39], 00:21:22.189 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:21:22.189 | 70.00th=[ 42], 80.00th=[ 45], 90.00th=[ 50], 95.00th=[ 80], 00:21:22.189 | 99.00th=[ 178], 99.50th=[ 188], 
99.90th=[ 205], 99.95th=[ 211], 00:21:22.189 | 99.99th=[ 224] 00:21:22.189 write: IOPS=2420, BW=9680KiB/s (9913kB/s)(256MiB/27080msec); 0 zone resets 00:21:22.189 slat (usec): min=5, max=771, avg= 8.95, stdev= 6.92 00:21:22.189 clat (usec): min=493, max=43253, avg=7209.09, stdev=4769.38 00:21:22.189 lat (usec): min=502, max=43292, avg=7218.04, stdev=4769.59 00:21:22.189 clat percentiles (usec): 00:21:22.189 | 1.00th=[ 1369], 5.00th=[ 2114], 10.00th=[ 2966], 20.00th=[ 3851], 00:21:22.189 | 30.00th=[ 4817], 40.00th=[ 5538], 50.00th=[ 6128], 60.00th=[ 6980], 00:21:22.189 | 70.00th=[ 7439], 80.00th=[ 8848], 90.00th=[13566], 95.00th=[15664], 00:21:22.189 | 99.00th=[27919], 99.50th=[31851], 99.90th=[38536], 99.95th=[41157], 00:21:22.189 | 99.99th=[42206] 00:21:22.189 bw ( KiB/s): min= 8, max=42120, per=100.00%, avg=23736.23, stdev=14754.99, samples=22 00:21:22.189 iops : min= 2, max=10530, avg=5934.05, stdev=3688.74, samples=22 00:21:22.189 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.10% 00:21:22.189 lat (msec) : 2=1.96%, 4=9.19%, 10=30.23%, 20=7.48%, 50=46.44% 00:21:22.189 lat (msec) : 100=2.50%, 250=2.05% 00:21:22.189 cpu : usr=98.87%, sys=0.37%, ctx=64, majf=0, minf=5581 00:21:22.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:22.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.189 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.189 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.189 00:21:22.189 Run status group 0 (all jobs): 00:21:22.189 READ: bw=18.6MiB/s (19.5MB/s), 9531KiB/s-9619KiB/s (9759kB/s-9849kB/s), io=512MiB (536MB), run=27234-27480msec 00:21:22.189 WRITE: bw=18.7MiB/s (19.6MB/s), 9556KiB/s-9680KiB/s (9786kB/s-9913kB/s), io=512MiB (537MB), run=27080-27431msec 00:21:22.189 ----------------------------------------------------- 00:21:22.189 Suppressions used: 00:21:22.189 count bytes template 00:21:22.189 2 10 /usr/src/fio/parse.c 00:21:22.189 2 192 /usr/src/fio/iolog.c 00:21:22.189 1 8 libtcmalloc_minimal.so 00:21:22.189 1 904 libcrypto.so 00:21:22.189 ----------------------------------------------------- 00:21:22.189 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:22.189 17:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:22.189 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:22.189 fio-3.35 00:21:22.189 Starting 1 thread 00:21:40.312 00:21:40.312 test: (groupid=0, jobs=1): err= 0: pid=78946: Thu Jul 25 17:12:31 2024 00:21:40.313 read: IOPS=5910, BW=23.1MiB/s (24.2MB/s)(255MiB/11032msec) 00:21:40.313 slat (nsec): min=4301, max=55185, avg=6944.82, stdev=2950.69 00:21:40.313 clat (usec): min=878, max=42699, avg=21645.46, stdev=1144.30 00:21:40.313 lat (usec): min=883, max=42707, avg=21652.41, stdev=1144.31 00:21:40.313 clat percentiles (usec): 00:21:40.313 | 1.00th=[19792], 5.00th=[20317], 10.00th=[20579], 20.00th=[21103], 00:21:40.313 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21627], 60.00th=[21890], 00:21:40.313 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22938], 00:21:40.313 | 99.00th=[26346], 99.50th=[26608], 99.90th=[31589], 99.95th=[36963], 00:21:40.313 | 99.99th=[41681] 00:21:40.313 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(256MiB/5554msec); 0 zone resets 00:21:40.313 slat (usec): min=5, max=582, avg= 9.44, stdev= 6.32 00:21:40.313 clat (usec): min=667, max=73736, avg=10789.96, stdev=13519.46 00:21:40.313 lat (usec): min=675, max=73747, avg=10799.41, stdev=13519.50 00:21:40.313 clat percentiles (usec): 00:21:40.313 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1467], 00:21:40.313 | 30.00th=[ 1680], 40.00th=[ 2147], 50.00th=[ 7242], 60.00th=[ 8160], 00:21:40.313 | 70.00th=[ 9372], 80.00th=[10945], 90.00th=[39584], 95.00th=[41681], 00:21:40.313 | 99.00th=[45876], 99.50th=[48497], 99.90th=[55837], 99.95th=[60556], 00:21:40.313 | 99.99th=[71828] 00:21:40.313 bw ( KiB/s): min= 2512, max=63104, per=92.57%, avg=43690.67, stdev=15541.70, samples=12 00:21:40.313 iops : min= 628, max=15776, avg=10922.67, stdev=3885.43, samples=12 00:21:40.313 lat (usec) : 750=0.01%, 1000=0.77% 00:21:40.313 lat (msec) : 2=18.49%, 4=1.67%, 10=16.20%, 20=6.06%, 50=56.65% 00:21:40.313 lat (msec) : 100=0.14% 00:21:40.313 cpu : usr=98.63%, sys=0.57%, ctx=30, majf=0, minf=5567 00:21:40.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:21:40.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.313 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.313 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.313 00:21:40.313 Run status group 0 (all jobs): 00:21:40.313 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=255MiB (267MB), run=11032-11032msec 00:21:40.313 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=256MiB (268MB), run=5554-5554msec 00:21:40.571 ----------------------------------------------------- 00:21:40.571 Suppressions used: 00:21:40.571 count bytes template 00:21:40.571 1 5 /usr/src/fio/parse.c 00:21:40.571 2 192 /usr/src/fio/iolog.c 00:21:40.571 1 8 libtcmalloc_minimal.so 00:21:40.571 1 904 libcrypto.so 00:21:40.571 ----------------------------------------------------- 00:21:40.571 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:40.830 Remove shared memory files 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62306 /dev/shm/spdk_tgt_trace.pid77238 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:40.830 ************************************ 00:21:40.830 END TEST ftl_fio_basic 00:21:40.830 ************************************ 00:21:40.830 00:21:40.830 real 1m11.843s 00:21:40.830 user 2m38.089s 00:21:40.830 sys 0m4.006s 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.830 17:12:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:40.830 17:12:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:40.830 17:12:33 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:40.830 17:12:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.830 17:12:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:40.830 ************************************ 00:21:40.830 START TEST ftl_bdevperf 00:21:40.830 ************************************ 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:40.830 * Looking for test storage... 
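The three fio verification jobs above (randw-verify, randw-verify-j2, randw-verify-depth128) are all launched the same way: autotest_common.sh locates the sanitizer runtime the SPDK fio plugin links against and preloads it together with the plugin, then runs stock fio with the spdk_bdev ioengine. A condensed sketch of that invocation, with illustrative paths matching this run (SPDK repo at /home/vagrant/spdk_repo/spdk, fio built in /usr/src/fio, ASan-enabled build):

  # Find the ASan runtime the spdk_bdev fio plugin was built against,
  # so LD_PRELOAD loads the sanitizer before the plugin itself
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # Run fio with the SPDK bdev external ioengine against one of the FTL job files
  LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio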
00:21:40.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:40.830 17:12:33 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79206 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79206 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 79206 ']' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.830 17:12:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.091 [2024-07-25 17:12:33.390603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:41.091 [2024-07-25 17:12:33.390825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79206 ] 00:21:41.350 [2024-07-25 17:12:33.566445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.350 [2024-07-25 17:12:33.757201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:41.916 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:42.483 17:12:34 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:42.483 { 00:21:42.483 "name": "nvme0n1", 00:21:42.483 "aliases": [ 00:21:42.483 "63dcc0f0-d9ff-4c60-8c5a-a856841e5abd" 00:21:42.483 ], 00:21:42.483 "product_name": "NVMe disk", 00:21:42.483 "block_size": 4096, 00:21:42.483 "num_blocks": 1310720, 00:21:42.483 "uuid": "63dcc0f0-d9ff-4c60-8c5a-a856841e5abd", 00:21:42.483 "assigned_rate_limits": { 00:21:42.483 "rw_ios_per_sec": 0, 00:21:42.483 "rw_mbytes_per_sec": 0, 00:21:42.483 "r_mbytes_per_sec": 0, 00:21:42.483 "w_mbytes_per_sec": 0 00:21:42.483 }, 00:21:42.483 "claimed": true, 00:21:42.483 "claim_type": "read_many_write_one", 00:21:42.483 "zoned": false, 00:21:42.483 "supported_io_types": { 00:21:42.483 "read": true, 00:21:42.483 "write": true, 00:21:42.483 "unmap": true, 00:21:42.483 "flush": true, 00:21:42.483 "reset": true, 00:21:42.483 "nvme_admin": true, 00:21:42.483 "nvme_io": true, 00:21:42.483 "nvme_io_md": false, 00:21:42.483 "write_zeroes": true, 00:21:42.483 "zcopy": false, 00:21:42.483 "get_zone_info": false, 00:21:42.483 "zone_management": false, 00:21:42.483 "zone_append": false, 00:21:42.483 "compare": true, 00:21:42.483 "compare_and_write": false, 00:21:42.483 "abort": true, 00:21:42.483 "seek_hole": false, 00:21:42.483 "seek_data": false, 00:21:42.483 "copy": true, 00:21:42.483 "nvme_iov_md": false 00:21:42.483 }, 00:21:42.483 "driver_specific": { 00:21:42.483 "nvme": [ 00:21:42.483 { 00:21:42.483 "pci_address": "0000:00:11.0", 00:21:42.483 "trid": { 00:21:42.483 "trtype": "PCIe", 00:21:42.483 "traddr": "0000:00:11.0" 00:21:42.483 }, 00:21:42.483 "ctrlr_data": { 00:21:42.483 "cntlid": 0, 00:21:42.483 "vendor_id": "0x1b36", 00:21:42.483 "model_number": "QEMU NVMe Ctrl", 00:21:42.483 "serial_number": "12341", 00:21:42.483 "firmware_revision": "8.0.0", 00:21:42.483 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:42.483 "oacs": { 00:21:42.483 "security": 0, 00:21:42.483 "format": 1, 00:21:42.483 "firmware": 0, 00:21:42.483 "ns_manage": 1 00:21:42.483 }, 00:21:42.483 "multi_ctrlr": false, 00:21:42.483 "ana_reporting": false 00:21:42.483 }, 00:21:42.483 "vs": { 00:21:42.483 "nvme_version": "1.4" 00:21:42.483 }, 00:21:42.483 "ns_data": { 00:21:42.483 "id": 1, 00:21:42.483 "can_share": false 00:21:42.483 } 00:21:42.483 } 00:21:42.483 ], 00:21:42.483 "mp_policy": "active_passive" 00:21:42.483 } 00:21:42.483 } 00:21:42.483 ]' 00:21:42.483 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:42.742 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:42.742 17:12:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:42.742 17:12:35 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:42.742 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:42.999 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=31cf0704-2b15-4d23-9d22-da781be200cd 00:21:42.999 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:42.999 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31cf0704-2b15-4d23-9d22-da781be200cd 00:21:42.999 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:43.256 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3c281ef3-b78a-4bd4-8564-e11e12056870 00:21:43.256 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3c281ef3-b78a-4bd4-8564-e11e12056870 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:43.513 17:12:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:43.785 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:43.785 { 00:21:43.785 "name": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:43.785 "aliases": [ 00:21:43.785 "lvs/nvme0n1p0" 00:21:43.785 ], 00:21:43.785 "product_name": "Logical Volume", 00:21:43.785 "block_size": 4096, 00:21:43.785 "num_blocks": 26476544, 00:21:43.785 "uuid": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:43.785 "assigned_rate_limits": { 00:21:43.785 "rw_ios_per_sec": 0, 00:21:43.785 "rw_mbytes_per_sec": 0, 00:21:43.785 "r_mbytes_per_sec": 0, 00:21:43.785 "w_mbytes_per_sec": 0 00:21:43.785 }, 00:21:43.785 "claimed": false, 00:21:43.785 "zoned": false, 00:21:43.785 "supported_io_types": { 00:21:43.785 "read": true, 00:21:43.785 "write": true, 00:21:43.785 "unmap": true, 00:21:43.785 "flush": false, 00:21:43.785 "reset": true, 00:21:43.785 "nvme_admin": false, 00:21:43.785 "nvme_io": false, 00:21:43.785 "nvme_io_md": false, 00:21:43.785 "write_zeroes": true, 00:21:43.785 "zcopy": false, 00:21:43.785 "get_zone_info": false, 00:21:43.785 "zone_management": false, 00:21:43.785 "zone_append": false, 00:21:43.785 "compare": false, 00:21:43.785 "compare_and_write": false, 00:21:43.785 "abort": false, 00:21:43.785 "seek_hole": true, 
00:21:43.785 "seek_data": true, 00:21:43.785 "copy": false, 00:21:43.785 "nvme_iov_md": false 00:21:43.785 }, 00:21:43.785 "driver_specific": { 00:21:43.785 "lvol": { 00:21:43.785 "lvol_store_uuid": "3c281ef3-b78a-4bd4-8564-e11e12056870", 00:21:43.785 "base_bdev": "nvme0n1", 00:21:43.785 "thin_provision": true, 00:21:43.785 "num_allocated_clusters": 0, 00:21:43.785 "snapshot": false, 00:21:43.785 "clone": false, 00:21:43.785 "esnap_clone": false 00:21:43.785 } 00:21:43.785 } 00:21:43.785 } 00:21:43.785 ]' 00:21:43.785 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:43.786 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:43.786 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:44.081 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.347 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:44.347 { 00:21:44.347 "name": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:44.347 "aliases": [ 00:21:44.347 "lvs/nvme0n1p0" 00:21:44.347 ], 00:21:44.347 "product_name": "Logical Volume", 00:21:44.347 "block_size": 4096, 00:21:44.347 "num_blocks": 26476544, 00:21:44.347 "uuid": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:44.347 "assigned_rate_limits": { 00:21:44.347 "rw_ios_per_sec": 0, 00:21:44.347 "rw_mbytes_per_sec": 0, 00:21:44.347 "r_mbytes_per_sec": 0, 00:21:44.347 "w_mbytes_per_sec": 0 00:21:44.347 }, 00:21:44.347 "claimed": false, 00:21:44.347 "zoned": false, 00:21:44.347 "supported_io_types": { 00:21:44.347 "read": true, 00:21:44.347 "write": true, 00:21:44.347 "unmap": true, 00:21:44.347 "flush": false, 00:21:44.347 "reset": true, 00:21:44.347 "nvme_admin": false, 00:21:44.347 "nvme_io": false, 00:21:44.347 "nvme_io_md": false, 00:21:44.347 "write_zeroes": true, 00:21:44.347 "zcopy": false, 00:21:44.347 "get_zone_info": false, 00:21:44.347 "zone_management": false, 00:21:44.347 "zone_append": false, 00:21:44.347 "compare": false, 00:21:44.347 "compare_and_write": false, 00:21:44.347 "abort": false, 00:21:44.347 "seek_hole": true, 00:21:44.347 "seek_data": true, 00:21:44.347 
"copy": false, 00:21:44.347 "nvme_iov_md": false 00:21:44.347 }, 00:21:44.347 "driver_specific": { 00:21:44.347 "lvol": { 00:21:44.347 "lvol_store_uuid": "3c281ef3-b78a-4bd4-8564-e11e12056870", 00:21:44.347 "base_bdev": "nvme0n1", 00:21:44.347 "thin_provision": true, 00:21:44.347 "num_allocated_clusters": 0, 00:21:44.347 "snapshot": false, 00:21:44.347 "clone": false, 00:21:44.347 "esnap_clone": false 00:21:44.347 } 00:21:44.347 } 00:21:44.347 } 00:21:44.347 ]' 00:21:44.347 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:44.347 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:44.347 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:44.604 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:44.604 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:44.604 17:12:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:44.604 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:44.604 17:12:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:44.604 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 195ea365-c8b3-4ae2-a8f4-bfb732859c57 00:21:44.863 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:44.863 { 00:21:44.863 "name": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:44.863 "aliases": [ 00:21:44.863 "lvs/nvme0n1p0" 00:21:44.863 ], 00:21:44.863 "product_name": "Logical Volume", 00:21:44.863 "block_size": 4096, 00:21:44.863 "num_blocks": 26476544, 00:21:44.863 "uuid": "195ea365-c8b3-4ae2-a8f4-bfb732859c57", 00:21:44.863 "assigned_rate_limits": { 00:21:44.863 "rw_ios_per_sec": 0, 00:21:44.863 "rw_mbytes_per_sec": 0, 00:21:44.863 "r_mbytes_per_sec": 0, 00:21:44.863 "w_mbytes_per_sec": 0 00:21:44.863 }, 00:21:44.863 "claimed": false, 00:21:44.863 "zoned": false, 00:21:44.863 "supported_io_types": { 00:21:44.863 "read": true, 00:21:44.863 "write": true, 00:21:44.863 "unmap": true, 00:21:44.863 "flush": false, 00:21:44.863 "reset": true, 00:21:44.863 "nvme_admin": false, 00:21:44.863 "nvme_io": false, 00:21:44.863 "nvme_io_md": false, 00:21:44.863 "write_zeroes": true, 00:21:44.863 "zcopy": false, 00:21:44.863 "get_zone_info": false, 00:21:44.863 "zone_management": false, 00:21:44.863 "zone_append": false, 00:21:44.863 "compare": false, 00:21:44.863 "compare_and_write": false, 00:21:44.863 "abort": false, 00:21:44.863 "seek_hole": true, 00:21:44.863 "seek_data": true, 00:21:44.863 "copy": false, 00:21:44.863 "nvme_iov_md": false 00:21:44.863 }, 00:21:44.863 "driver_specific": { 00:21:44.863 "lvol": { 00:21:44.863 "lvol_store_uuid": "3c281ef3-b78a-4bd4-8564-e11e12056870", 00:21:44.863 "base_bdev": 
"nvme0n1", 00:21:44.863 "thin_provision": true, 00:21:44.863 "num_allocated_clusters": 0, 00:21:44.863 "snapshot": false, 00:21:44.863 "clone": false, 00:21:44.863 "esnap_clone": false 00:21:44.863 } 00:21:44.863 } 00:21:44.863 } 00:21:44.863 ]' 00:21:44.863 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:44.863 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:21:44.863 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:45.122 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:45.122 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:45.122 17:12:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:21:45.122 17:12:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:21:45.122 17:12:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 195ea365-c8b3-4ae2-a8f4-bfb732859c57 -c nvc0n1p0 --l2p_dram_limit 20 00:21:45.122 [2024-07-25 17:12:37.556139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.556210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:45.122 [2024-07-25 17:12:37.556255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.122 [2024-07-25 17:12:37.556269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.556346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.556364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.122 [2024-07-25 17:12:37.556400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:45.122 [2024-07-25 17:12:37.556412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.556441] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:45.122 [2024-07-25 17:12:37.557429] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:45.122 [2024-07-25 17:12:37.557495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.557526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.122 [2024-07-25 17:12:37.557541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:21:45.122 [2024-07-25 17:12:37.557552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.557709] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 477a5d34-c267-4410-bff5-fa36eeecfb93 00:21:45.122 [2024-07-25 17:12:37.559718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.559760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:45.122 [2024-07-25 17:12:37.559797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:45.122 [2024-07-25 17:12:37.559811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.570002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.570070] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.122 [2024-07-25 17:12:37.570089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.104 ms 00:21:45.122 [2024-07-25 17:12:37.570103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.570244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.570269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.122 [2024-07-25 17:12:37.570285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:45.122 [2024-07-25 17:12:37.570302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.570392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.570414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:45.122 [2024-07-25 17:12:37.570426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:45.122 [2024-07-25 17:12:37.570440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.570471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.122 [2024-07-25 17:12:37.575436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.575474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.122 [2024-07-25 17:12:37.575514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:21:45.122 [2024-07-25 17:12:37.575526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.575575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.575608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:45.122 [2024-07-25 17:12:37.575623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:45.122 [2024-07-25 17:12:37.575635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.122 [2024-07-25 17:12:37.575680] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:45.122 [2024-07-25 17:12:37.575867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:45.122 [2024-07-25 17:12:37.575893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:45.122 [2024-07-25 17:12:37.575909] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:45.122 [2024-07-25 17:12:37.575927] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:45.122 [2024-07-25 17:12:37.575941] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:45.122 [2024-07-25 17:12:37.575976] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:45.122 [2024-07-25 17:12:37.575988] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:45.122 [2024-07-25 17:12:37.576004] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:45.122 [2024-07-25 17:12:37.576015] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:21:45.122 [2024-07-25 17:12:37.576049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.122 [2024-07-25 17:12:37.576063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:45.122 [2024-07-25 17:12:37.576082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:21:45.122 [2024-07-25 17:12:37.576094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.123 [2024-07-25 17:12:37.576182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.123 [2024-07-25 17:12:37.576197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:45.123 [2024-07-25 17:12:37.576212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:45.123 [2024-07-25 17:12:37.576224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.123 [2024-07-25 17:12:37.576331] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:45.123 [2024-07-25 17:12:37.576348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:45.123 [2024-07-25 17:12:37.576363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:45.123 [2024-07-25 17:12:37.576404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:45.123 [2024-07-25 17:12:37.576442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.123 [2024-07-25 17:12:37.576465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:45.123 [2024-07-25 17:12:37.576476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:45.123 [2024-07-25 17:12:37.576489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.123 [2024-07-25 17:12:37.576500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:45.123 [2024-07-25 17:12:37.576515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:45.123 [2024-07-25 17:12:37.576526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:45.123 [2024-07-25 17:12:37.576552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:45.123 [2024-07-25 17:12:37.576603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:45.123 [2024-07-25 17:12:37.576644] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:45.123 [2024-07-25 17:12:37.576680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:45.123 [2024-07-25 17:12:37.576714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:45.123 [2024-07-25 17:12:37.576762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.123 [2024-07-25 17:12:37.576786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:45.123 [2024-07-25 17:12:37.576798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:45.123 [2024-07-25 17:12:37.576811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.123 [2024-07-25 17:12:37.576822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:45.123 [2024-07-25 17:12:37.576837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:45.123 [2024-07-25 17:12:37.576847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:45.123 [2024-07-25 17:12:37.576872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:45.123 [2024-07-25 17:12:37.576885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576895] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:45.123 [2024-07-25 17:12:37.576910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:45.123 [2024-07-25 17:12:37.576921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.123 [2024-07-25 17:12:37.576935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.123 [2024-07-25 17:12:37.576947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:45.123 [2024-07-25 17:12:37.576963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:45.123 [2024-07-25 17:12:37.576974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:45.123 [2024-07-25 17:12:37.577022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:45.123 [2024-07-25 17:12:37.577033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:45.123 [2024-07-25 17:12:37.577048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:45.123 [2024-07-25 17:12:37.577065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:45.123 [2024-07-25 17:12:37.577083] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:45.123 [2024-07-25 17:12:37.577110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:45.123 [2024-07-25 17:12:37.577123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:45.123 [2024-07-25 17:12:37.577137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:45.123 [2024-07-25 17:12:37.577149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:45.123 [2024-07-25 17:12:37.577163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:45.123 [2024-07-25 17:12:37.577174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:45.123 [2024-07-25 17:12:37.577189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:45.123 [2024-07-25 17:12:37.577206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:45.123 [2024-07-25 17:12:37.577226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:45.123 [2024-07-25 17:12:37.577292] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:45.123 [2024-07-25 17:12:37.577308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:45.123 [2024-07-25 17:12:37.577336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:45.123 [2024-07-25 17:12:37.577348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:45.123 [2024-07-25 17:12:37.577362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:45.123 [2024-07-25 17:12:37.577375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.123 [2024-07-25 17:12:37.577409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:45.123 [2024-07-25 17:12:37.577422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:21:45.123 [2024-07-25 17:12:37.577436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.123 [2024-07-25 17:12:37.577485] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:45.123 [2024-07-25 17:12:37.577508] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:48.443 [2024-07-25 17:12:40.648680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.648766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:48.443 [2024-07-25 17:12:40.648792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3071.209 ms 00:21:48.443 [2024-07-25 17:12:40.648807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.443 [2024-07-25 17:12:40.696320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.696612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.443 [2024-07-25 17:12:40.696659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.990 ms 00:21:48.443 [2024-07-25 17:12:40.696675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.443 [2024-07-25 17:12:40.696859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.696885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.443 [2024-07-25 17:12:40.696915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:48.443 [2024-07-25 17:12:40.696932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.443 [2024-07-25 17:12:40.736688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.736760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.443 [2024-07-25 17:12:40.736779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.703 ms 00:21:48.443 [2024-07-25 17:12:40.736795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.443 [2024-07-25 17:12:40.736835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.736865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.443 [2024-07-25 17:12:40.736878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:48.443 [2024-07-25 17:12:40.736893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.443 [2024-07-25 17:12:40.737646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.443 [2024-07-25 17:12:40.737702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.443 [2024-07-25 17:12:40.737720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:21:48.443 [2024-07-25 17:12:40.737734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.737918] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.737956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.444 [2024-07-25 17:12:40.737989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:48.444 [2024-07-25 17:12:40.738016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.755111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.755169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.444 [2024-07-25 17:12:40.755186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.026 ms 00:21:48.444 [2024-07-25 17:12:40.755199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.767850] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:48.444 [2024-07-25 17:12:40.775511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.775545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.444 [2024-07-25 17:12:40.775581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.226 ms 00:21:48.444 [2024-07-25 17:12:40.775592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.855764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.855846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:48.444 [2024-07-25 17:12:40.855886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.136 ms 00:21:48.444 [2024-07-25 17:12:40.855909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.856380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.856427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.444 [2024-07-25 17:12:40.856450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:21:48.444 [2024-07-25 17:12:40.856463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.882095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.882134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:48.444 [2024-07-25 17:12:40.882170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.557 ms 00:21:48.444 [2024-07-25 17:12:40.882182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.906835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.906876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:48.444 [2024-07-25 17:12:40.906914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.622 ms 00:21:48.444 [2024-07-25 17:12:40.906925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.444 [2024-07-25 17:12:40.907855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.444 [2024-07-25 17:12:40.907885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.444 [2024-07-25 17:12:40.907936] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:21:48.444 [2024-07-25 17:12:40.907964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:40.993165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:40.993227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:48.702 [2024-07-25 17:12:40.993270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.155 ms 00:21:48.702 [2024-07-25 17:12:40.993283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.022620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:41.022711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:48.702 [2024-07-25 17:12:41.022753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.279 ms 00:21:48.702 [2024-07-25 17:12:41.022769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.050055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:41.050111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:48.702 [2024-07-25 17:12:41.050149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.231 ms 00:21:48.702 [2024-07-25 17:12:41.050161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.076671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:41.076714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.702 [2024-07-25 17:12:41.076751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.464 ms 00:21:48.702 [2024-07-25 17:12:41.076762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.076817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:41.076836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.702 [2024-07-25 17:12:41.076855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:48.702 [2024-07-25 17:12:41.076867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.076998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.702 [2024-07-25 17:12:41.077036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.702 [2024-07-25 17:12:41.077053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:48.702 [2024-07-25 17:12:41.077067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.702 [2024-07-25 17:12:41.078534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3521.716 ms, result 0 00:21:48.702 { 00:21:48.702 "name": "ftl0", 00:21:48.702 "uuid": "477a5d34-c267-4410-bff5-fa36eeecfb93" 00:21:48.702 } 00:21:48.702 17:12:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:48.702 17:12:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:21:48.702 17:12:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:21:48.960 17:12:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:49.218 [2024-07-25 17:12:41.482543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:49.218 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:49.218 Zero copy mechanism will not be used. 00:21:49.218 Running I/O for 4 seconds... 00:21:53.403 00:21:53.403 Latency(us) 00:21:53.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.403 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:53.403 ftl0 : 4.00 1634.69 108.55 0.00 0.00 641.40 268.10 1966.08 00:21:53.403 =================================================================================================================== 00:21:53.403 Total : 1634.69 108.55 0.00 0.00 641.40 268.10 1966.08 00:21:53.403 [2024-07-25 17:12:45.493122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:53.403 0 00:21:53.403 17:12:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:53.403 [2024-07-25 17:12:45.627383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:53.403 Running I/O for 4 seconds... 00:21:57.584 00:21:57.584 Latency(us) 00:21:57.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.584 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:57.584 ftl0 : 4.02 7928.86 30.97 0.00 0.00 16100.69 314.65 24903.68 00:21:57.584 =================================================================================================================== 00:21:57.584 Total : 7928.86 30.97 0.00 0.00 16100.69 0.00 24903.68 00:21:57.584 0 00:21:57.584 [2024-07-25 17:12:49.655804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:57.584 17:12:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:57.584 [2024-07-25 17:12:49.805530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:57.584 Running I/O for 4 seconds... 
00:22:01.827 00:22:01.827 Latency(us) 00:22:01.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.827 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:01.827 Verification LBA range: start 0x0 length 0x1400000 00:22:01.827 ftl0 : 4.01 5332.63 20.83 0.00 0.00 23916.02 361.19 25737.77 00:22:01.827 =================================================================================================================== 00:22:01.827 Total : 5332.63 20.83 0.00 0.00 23916.02 0.00 25737.77 00:22:01.827 0 00:22:01.827 [2024-07-25 17:12:53.835609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:22:01.827 17:12:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:22:01.827 [2024-07-25 17:12:54.108201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.827 [2024-07-25 17:12:54.108255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:01.827 [2024-07-25 17:12:54.108295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.827 [2024-07-25 17:12:54.108310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.827 [2024-07-25 17:12:54.108344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.827 [2024-07-25 17:12:54.111641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.827 [2024-07-25 17:12:54.111697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:01.827 [2024-07-25 17:12:54.111713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:22:01.827 [2024-07-25 17:12:54.111727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.827 [2024-07-25 17:12:54.113794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.827 [2024-07-25 17:12:54.113858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:01.827 [2024-07-25 17:12:54.113891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:22:01.827 [2024-07-25 17:12:54.113913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.827 [2024-07-25 17:12:54.284445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.827 [2024-07-25 17:12:54.284524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:01.827 [2024-07-25 17:12:54.284542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.510 ms 00:22:01.827 [2024-07-25 17:12:54.284560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.827 [2024-07-25 17:12:54.290047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.827 [2024-07-25 17:12:54.290103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:01.827 [2024-07-25 17:12:54.290118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.438 ms 00:22:01.827 [2024-07-25 17:12:54.290132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.316230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.316307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.087 [2024-07-25 17:12:54.316325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 26.033 ms 00:22:02.087 [2024-07-25 17:12:54.316340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.333495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.333556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.087 [2024-07-25 17:12:54.333577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.113 ms 00:22:02.087 [2024-07-25 17:12:54.333591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.333739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.333765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.087 [2024-07-25 17:12:54.333778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:02.087 [2024-07-25 17:12:54.333793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.359622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.359682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:02.087 [2024-07-25 17:12:54.359699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.808 ms 00:22:02.087 [2024-07-25 17:12:54.359713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.385238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.385300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:02.087 [2024-07-25 17:12:54.385316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.485 ms 00:22:02.087 [2024-07-25 17:12:54.385329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.410534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.410593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.087 [2024-07-25 17:12:54.410610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.165 ms 00:22:02.087 [2024-07-25 17:12:54.410623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.435366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.087 [2024-07-25 17:12:54.435425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:02.087 [2024-07-25 17:12:54.435442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.634 ms 00:22:02.087 [2024-07-25 17:12:54.435457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.087 [2024-07-25 17:12:54.435497] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:02.087 [2024-07-25 17:12:54.435523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:22:02.087 [2024-07-25 17:12:54.435597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.435966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:02.087 [2024-07-25 17:12:54.436175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436701] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.436980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:02.088 [2024-07-25 17:12:54.437021] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:02.088 [2024-07-25 17:12:54.437033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 477a5d34-c267-4410-bff5-fa36eeecfb93 00:22:02.088 [2024-07-25 17:12:54.437048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:02.088 [2024-07-25 17:12:54.437060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:22:02.088 [2024-07-25 17:12:54.437085] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:02.088 [2024-07-25 17:12:54.437102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:02.088 [2024-07-25 17:12:54.437115] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:02.088 [2024-07-25 17:12:54.437127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:02.088 [2024-07-25 17:12:54.437141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:02.088 [2024-07-25 17:12:54.437152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:02.088 [2024-07-25 17:12:54.437168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:02.088 [2024-07-25 17:12:54.437179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.088 [2024-07-25 17:12:54.437194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:02.088 [2024-07-25 17:12:54.437207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:22:02.088 [2024-07-25 17:12:54.437222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.452439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.088 [2024-07-25 17:12:54.452503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:02.088 [2024-07-25 17:12:54.452520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.173 ms 00:22:02.088 [2024-07-25 17:12:54.452536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.452986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.088 [2024-07-25 17:12:54.453025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:02.088 [2024-07-25 17:12:54.453040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:22:02.088 [2024-07-25 17:12:54.453054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.489176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.088 [2024-07-25 17:12:54.489244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.088 [2024-07-25 17:12:54.489261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.088 [2024-07-25 17:12:54.489278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.489340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.088 [2024-07-25 17:12:54.489360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.088 [2024-07-25 17:12:54.489372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.088 [2024-07-25 17:12:54.489386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.489498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.088 [2024-07-25 17:12:54.489525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.088 [2024-07-25 17:12:54.489538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.088 [2024-07-25 17:12:54.489552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.088 [2024-07-25 17:12:54.489576] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.088 [2024-07-25 17:12:54.489593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.088 [2024-07-25 17:12:54.489604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.088 [2024-07-25 17:12:54.489617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.347 [2024-07-25 17:12:54.576215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.347 [2024-07-25 17:12:54.576332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.347 [2024-07-25 17:12:54.576353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.347 [2024-07-25 17:12:54.576370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.347 [2024-07-25 17:12:54.648434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.347 [2024-07-25 17:12:54.648510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.348 [2024-07-25 17:12:54.648528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.648543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.648642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.648666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.348 [2024-07-25 17:12:54.648683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.648697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.648815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.648840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.348 [2024-07-25 17:12:54.648854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.648867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.648984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.649059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.348 [2024-07-25 17:12:54.649075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.649096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.649156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.649178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:02.348 [2024-07-25 17:12:54.649191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.649204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.649249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.649268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.348 [2024-07-25 17:12:54.649280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.649294] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.649348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.348 [2024-07-25 17:12:54.649369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.348 [2024-07-25 17:12:54.649381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.348 [2024-07-25 17:12:54.649410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.348 [2024-07-25 17:12:54.649565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.330 ms, result 0 00:22:02.348 true 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79206 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 79206 ']' 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 79206 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79206 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79206' 00:22:02.348 killing process with pid 79206 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 79206 00:22:02.348 Received shutdown signal, test time was about 4.000000 seconds 00:22:02.348 00:22:02.348 Latency(us) 00:22:02.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.348 =================================================================================================================== 00:22:02.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.348 17:12:54 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 79206 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:06.549 Remove shared memory files 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:06.549 ************************************ 00:22:06.549 END TEST ftl_bdevperf 00:22:06.549 ************************************ 00:22:06.549 00:22:06.549 real 0m25.271s 00:22:06.549 user 0m28.273s 00:22:06.549 sys 0m1.252s 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.549 17:12:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.549 17:12:58 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:06.549 17:12:58 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:06.549 17:12:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.549 17:12:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:06.549 ************************************ 00:22:06.549 START TEST ftl_trim 00:22:06.549 ************************************ 00:22:06.549 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:06.549 * Looking for test storage... 00:22:06.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
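The ftl/common.sh prologue traced above only resolves paths and exports defaults. A condensed sketch of that setup, using the same paths and values shown in the xtrace (collapsing the export/assign pairs into single lines for readability; this is an illustration, not the literal file), would be:

    # Sketch of the ftl/common.sh variable setup seen in the trace above.
    testdir=$(readlink -f "$(dirname "$0")")          # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")           # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py

    export ftl_tgt_core_mask='[0]'                    # FTL target pinned to core 0
    export spdk_tgt_bin=$rootdir/build/bin/spdk_tgt
    export spdk_tgt_cpumask='[0]'
    export spdk_tgt_cnfg=$testdir/config/tgt.json
    export spdk_ini_bin=$rootdir/build/bin/spdk_tgt   # initiator reuses the spdk_tgt binary
    export spdk_ini_cpumask='[1]'
    export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
    export spdk_ini_cnfg=$testdir/config/ini.json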
00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.549 17:12:58 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79568 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:06.550 17:12:58 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79568 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79568 ']' 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.550 17:12:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:06.550 [2024-07-25 17:12:58.734425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
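The waitforlisten call traced above shows only its argument checks before xtrace is disabled, so the polling loop itself is not visible in this log. A simplified sketch of what such a wait helper does follows; the loop body is an assumption, not a copy of autotest_common.sh:

    # Assumed shape of a waitforlisten-style helper; the real loop is hidden behind
    # xtrace_disable in the trace above, so the polling details here are guesses.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1               # target exited during startup
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                          # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }

In this run the equivalent call would be waitforlisten_sketch 79568 /var/tmp/spdk.sock, issued right after spdk_tgt -m 0x7 is launched as traced above.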
00:22:06.550 [2024-07-25 17:12:58.734664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79568 ] 00:22:06.550 [2024-07-25 17:12:58.908554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:06.809 [2024-07-25 17:12:59.127024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.809 [2024-07-25 17:12:59.127147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.809 [2024-07-25 17:12:59.127154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.744 17:12:59 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.744 17:12:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:07.744 17:12:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:08.002 17:13:00 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:08.002 17:13:00 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:08.002 17:13:00 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:08.002 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:08.002 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:08.002 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:08.002 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:08.002 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:08.261 { 00:22:08.261 "name": "nvme0n1", 00:22:08.261 "aliases": [ 00:22:08.261 "392cb311-fd17-475a-ae77-0b56c5703884" 00:22:08.261 ], 00:22:08.261 "product_name": "NVMe disk", 00:22:08.261 "block_size": 4096, 00:22:08.261 "num_blocks": 1310720, 00:22:08.261 "uuid": "392cb311-fd17-475a-ae77-0b56c5703884", 00:22:08.261 "assigned_rate_limits": { 00:22:08.261 "rw_ios_per_sec": 0, 00:22:08.261 "rw_mbytes_per_sec": 0, 00:22:08.261 "r_mbytes_per_sec": 0, 00:22:08.261 "w_mbytes_per_sec": 0 00:22:08.261 }, 00:22:08.261 "claimed": true, 00:22:08.261 "claim_type": "read_many_write_one", 00:22:08.261 "zoned": false, 00:22:08.261 "supported_io_types": { 00:22:08.261 "read": true, 00:22:08.261 "write": true, 00:22:08.261 "unmap": true, 00:22:08.261 "flush": true, 00:22:08.261 "reset": true, 00:22:08.261 "nvme_admin": true, 00:22:08.261 "nvme_io": true, 00:22:08.261 "nvme_io_md": false, 00:22:08.261 "write_zeroes": true, 00:22:08.261 "zcopy": false, 00:22:08.261 "get_zone_info": false, 00:22:08.261 "zone_management": false, 00:22:08.261 "zone_append": false, 00:22:08.261 "compare": true, 00:22:08.261 "compare_and_write": false, 00:22:08.261 "abort": true, 00:22:08.261 "seek_hole": false, 00:22:08.261 "seek_data": false, 00:22:08.261 
"copy": true, 00:22:08.261 "nvme_iov_md": false 00:22:08.261 }, 00:22:08.261 "driver_specific": { 00:22:08.261 "nvme": [ 00:22:08.261 { 00:22:08.261 "pci_address": "0000:00:11.0", 00:22:08.261 "trid": { 00:22:08.261 "trtype": "PCIe", 00:22:08.261 "traddr": "0000:00:11.0" 00:22:08.261 }, 00:22:08.261 "ctrlr_data": { 00:22:08.261 "cntlid": 0, 00:22:08.261 "vendor_id": "0x1b36", 00:22:08.261 "model_number": "QEMU NVMe Ctrl", 00:22:08.261 "serial_number": "12341", 00:22:08.261 "firmware_revision": "8.0.0", 00:22:08.261 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:08.261 "oacs": { 00:22:08.261 "security": 0, 00:22:08.261 "format": 1, 00:22:08.261 "firmware": 0, 00:22:08.261 "ns_manage": 1 00:22:08.261 }, 00:22:08.261 "multi_ctrlr": false, 00:22:08.261 "ana_reporting": false 00:22:08.261 }, 00:22:08.261 "vs": { 00:22:08.261 "nvme_version": "1.4" 00:22:08.261 }, 00:22:08.261 "ns_data": { 00:22:08.261 "id": 1, 00:22:08.261 "can_share": false 00:22:08.261 } 00:22:08.261 } 00:22:08.261 ], 00:22:08.261 "mp_policy": "active_passive" 00:22:08.261 } 00:22:08.261 } 00:22:08.261 ]' 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:08.261 17:13:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:22:08.261 17:13:00 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:08.261 17:13:00 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:08.261 17:13:00 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:08.261 17:13:00 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:08.261 17:13:00 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:08.520 17:13:00 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3c281ef3-b78a-4bd4-8564-e11e12056870 00:22:08.520 17:13:00 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:08.520 17:13:00 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c281ef3-b78a-4bd4-8564-e11e12056870 00:22:08.778 17:13:01 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=443b97c8-ffdf-44dc-9ed6-9ab33881ce65 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 443b97c8-ffdf-44dc-9ed6-9ab33881ce65 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:09.037 17:13:01 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.037 17:13:01 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.037 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:09.037 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:09.037 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:09.037 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.295 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:09.295 { 00:22:09.295 "name": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:09.295 "aliases": [ 00:22:09.295 "lvs/nvme0n1p0" 00:22:09.295 ], 00:22:09.295 "product_name": "Logical Volume", 00:22:09.295 "block_size": 4096, 00:22:09.295 "num_blocks": 26476544, 00:22:09.295 "uuid": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:09.295 "assigned_rate_limits": { 00:22:09.295 "rw_ios_per_sec": 0, 00:22:09.295 "rw_mbytes_per_sec": 0, 00:22:09.295 "r_mbytes_per_sec": 0, 00:22:09.295 "w_mbytes_per_sec": 0 00:22:09.295 }, 00:22:09.295 "claimed": false, 00:22:09.295 "zoned": false, 00:22:09.295 "supported_io_types": { 00:22:09.295 "read": true, 00:22:09.295 "write": true, 00:22:09.295 "unmap": true, 00:22:09.295 "flush": false, 00:22:09.295 "reset": true, 00:22:09.295 "nvme_admin": false, 00:22:09.295 "nvme_io": false, 00:22:09.295 "nvme_io_md": false, 00:22:09.295 "write_zeroes": true, 00:22:09.295 "zcopy": false, 00:22:09.295 "get_zone_info": false, 00:22:09.295 "zone_management": false, 00:22:09.295 "zone_append": false, 00:22:09.295 "compare": false, 00:22:09.295 "compare_and_write": false, 00:22:09.295 "abort": false, 00:22:09.295 "seek_hole": true, 00:22:09.295 "seek_data": true, 00:22:09.295 "copy": false, 00:22:09.295 "nvme_iov_md": false 00:22:09.295 }, 00:22:09.295 "driver_specific": { 00:22:09.295 "lvol": { 00:22:09.295 "lvol_store_uuid": "443b97c8-ffdf-44dc-9ed6-9ab33881ce65", 00:22:09.295 "base_bdev": "nvme0n1", 00:22:09.295 "thin_provision": true, 00:22:09.295 "num_allocated_clusters": 0, 00:22:09.295 "snapshot": false, 00:22:09.296 "clone": false, 00:22:09.296 "esnap_clone": false 00:22:09.296 } 00:22:09.296 } 00:22:09.296 } 00:22:09.296 ]' 00:22:09.296 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:09.296 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:09.296 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:09.554 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:09.554 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:09.554 17:13:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:09.554 17:13:01 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:09.554 17:13:01 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:09.554 17:13:01 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:09.812 17:13:02 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:09.812 17:13:02 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:09.812 17:13:02 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.812 
17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:09.812 { 00:22:09.812 "name": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:09.812 "aliases": [ 00:22:09.812 "lvs/nvme0n1p0" 00:22:09.812 ], 00:22:09.812 "product_name": "Logical Volume", 00:22:09.812 "block_size": 4096, 00:22:09.812 "num_blocks": 26476544, 00:22:09.812 "uuid": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:09.812 "assigned_rate_limits": { 00:22:09.812 "rw_ios_per_sec": 0, 00:22:09.812 "rw_mbytes_per_sec": 0, 00:22:09.812 "r_mbytes_per_sec": 0, 00:22:09.812 "w_mbytes_per_sec": 0 00:22:09.812 }, 00:22:09.812 "claimed": false, 00:22:09.812 "zoned": false, 00:22:09.812 "supported_io_types": { 00:22:09.812 "read": true, 00:22:09.812 "write": true, 00:22:09.812 "unmap": true, 00:22:09.812 "flush": false, 00:22:09.812 "reset": true, 00:22:09.812 "nvme_admin": false, 00:22:09.812 "nvme_io": false, 00:22:09.812 "nvme_io_md": false, 00:22:09.812 "write_zeroes": true, 00:22:09.812 "zcopy": false, 00:22:09.812 "get_zone_info": false, 00:22:09.812 "zone_management": false, 00:22:09.812 "zone_append": false, 00:22:09.812 "compare": false, 00:22:09.812 "compare_and_write": false, 00:22:09.812 "abort": false, 00:22:09.812 "seek_hole": true, 00:22:09.812 "seek_data": true, 00:22:09.812 "copy": false, 00:22:09.812 "nvme_iov_md": false 00:22:09.812 }, 00:22:09.812 "driver_specific": { 00:22:09.812 "lvol": { 00:22:09.812 "lvol_store_uuid": "443b97c8-ffdf-44dc-9ed6-9ab33881ce65", 00:22:09.812 "base_bdev": "nvme0n1", 00:22:09.812 "thin_provision": true, 00:22:09.812 "num_allocated_clusters": 0, 00:22:09.812 "snapshot": false, 00:22:09.812 "clone": false, 00:22:09.812 "esnap_clone": false 00:22:09.812 } 00:22:09.812 } 00:22:09.812 } 00:22:09.812 ]' 00:22:09.812 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:10.070 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:10.070 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:10.070 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:10.070 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:10.070 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:10.070 17:13:02 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:10.070 17:13:02 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:10.329 17:13:02 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:10.329 17:13:02 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:10.329 17:13:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:10.329 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:10.329 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:10.329 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:10.329 17:13:02 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:22:10.329 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:10.587 { 00:22:10.587 "name": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:10.587 "aliases": [ 00:22:10.587 "lvs/nvme0n1p0" 00:22:10.587 ], 00:22:10.587 "product_name": "Logical Volume", 00:22:10.587 "block_size": 4096, 00:22:10.587 "num_blocks": 26476544, 00:22:10.587 "uuid": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:10.587 "assigned_rate_limits": { 00:22:10.587 "rw_ios_per_sec": 0, 00:22:10.587 "rw_mbytes_per_sec": 0, 00:22:10.587 "r_mbytes_per_sec": 0, 00:22:10.587 "w_mbytes_per_sec": 0 00:22:10.587 }, 00:22:10.587 "claimed": false, 00:22:10.587 "zoned": false, 00:22:10.587 "supported_io_types": { 00:22:10.587 "read": true, 00:22:10.587 "write": true, 00:22:10.587 "unmap": true, 00:22:10.587 "flush": false, 00:22:10.587 "reset": true, 00:22:10.587 "nvme_admin": false, 00:22:10.587 "nvme_io": false, 00:22:10.587 "nvme_io_md": false, 00:22:10.587 "write_zeroes": true, 00:22:10.587 "zcopy": false, 00:22:10.587 "get_zone_info": false, 00:22:10.587 "zone_management": false, 00:22:10.587 "zone_append": false, 00:22:10.587 "compare": false, 00:22:10.587 "compare_and_write": false, 00:22:10.587 "abort": false, 00:22:10.587 "seek_hole": true, 00:22:10.587 "seek_data": true, 00:22:10.587 "copy": false, 00:22:10.587 "nvme_iov_md": false 00:22:10.587 }, 00:22:10.587 "driver_specific": { 00:22:10.587 "lvol": { 00:22:10.587 "lvol_store_uuid": "443b97c8-ffdf-44dc-9ed6-9ab33881ce65", 00:22:10.587 "base_bdev": "nvme0n1", 00:22:10.587 "thin_provision": true, 00:22:10.587 "num_allocated_clusters": 0, 00:22:10.587 "snapshot": false, 00:22:10.587 "clone": false, 00:22:10.587 "esnap_clone": false 00:22:10.587 } 00:22:10.587 } 00:22:10.587 } 00:22:10.587 ]' 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:10.587 17:13:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:10.587 17:13:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:10.587 17:13:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:10.847 [2024-07-25 17:13:03.178407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.178486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:10.847 [2024-07-25 17:13:03.178507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:10.847 [2024-07-25 17:13:03.178523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.182198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.182247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.847 [2024-07-25 17:13:03.182264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:22:10.847 [2024-07-25 17:13:03.182278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.182435] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:10.847 [2024-07-25 17:13:03.183508] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:10.847 [2024-07-25 17:13:03.183551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.183572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.847 [2024-07-25 17:13:03.183585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:22:10.847 [2024-07-25 17:13:03.183599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.183836] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:10.847 [2024-07-25 17:13:03.185749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.185805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:10.847 [2024-07-25 17:13:03.185825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:10.847 [2024-07-25 17:13:03.185837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.195871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.195939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.847 [2024-07-25 17:13:03.195958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.937 ms 00:22:10.847 [2024-07-25 17:13:03.195970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.196180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.196203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.847 [2024-07-25 17:13:03.196219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:10.847 [2024-07-25 17:13:03.196230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.196325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.196343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:10.847 [2024-07-25 17:13:03.196358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:10.847 [2024-07-25 17:13:03.196369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.196420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:10.847 [2024-07-25 17:13:03.201513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.201573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.847 [2024-07-25 17:13:03.201590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:22:10.847 [2024-07-25 17:13:03.201603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 
17:13:03.201683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.201705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:10.847 [2024-07-25 17:13:03.201718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:10.847 [2024-07-25 17:13:03.201731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.201767] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:10.847 [2024-07-25 17:13:03.201969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:10.847 [2024-07-25 17:13:03.201989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:10.847 [2024-07-25 17:13:03.202029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:10.847 [2024-07-25 17:13:03.202049] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202065] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202081] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:10.847 [2024-07-25 17:13:03.202096] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:10.847 [2024-07-25 17:13:03.202107] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:10.847 [2024-07-25 17:13:03.202143] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:10.847 [2024-07-25 17:13:03.202157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.202171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:10.847 [2024-07-25 17:13:03.202184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:22:10.847 [2024-07-25 17:13:03.202198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.202318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.847 [2024-07-25 17:13:03.202336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:10.847 [2024-07-25 17:13:03.202349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:10.847 [2024-07-25 17:13:03.202365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.847 [2024-07-25 17:13:03.202492] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:10.847 [2024-07-25 17:13:03.202514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:10.847 [2024-07-25 17:13:03.202527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:10.847 [2024-07-25 17:13:03.202565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:22:10.847 [2024-07-25 17:13:03.202598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.847 [2024-07-25 17:13:03.202669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:10.847 [2024-07-25 17:13:03.202684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:10.847 [2024-07-25 17:13:03.202695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:10.847 [2024-07-25 17:13:03.202712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:10.847 [2024-07-25 17:13:03.202724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:10.847 [2024-07-25 17:13:03.202737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:10.847 [2024-07-25 17:13:03.202763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:10.847 [2024-07-25 17:13:03.202798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:10.847 [2024-07-25 17:13:03.202833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:10.847 [2024-07-25 17:13:03.202867] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:10.847 [2024-07-25 17:13:03.202903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:10.847 [2024-07-25 17:13:03.202926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:10.847 [2024-07-25 17:13:03.202937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:10.847 [2024-07-25 17:13:03.202952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.848 [2024-07-25 17:13:03.202962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:10.848 [2024-07-25 17:13:03.202999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:10.848 [2024-07-25 17:13:03.203044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:10.848 [2024-07-25 17:13:03.203057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:10.848 [2024-07-25 17:13:03.203067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:10.848 [2024-07-25 17:13:03.203082] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.848 [2024-07-25 17:13:03.203092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:10.848 [2024-07-25 17:13:03.203111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:10.848 [2024-07-25 17:13:03.203121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.848 [2024-07-25 17:13:03.203133] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:10.848 [2024-07-25 17:13:03.203144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:10.848 [2024-07-25 17:13:03.203159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:10.848 [2024-07-25 17:13:03.203170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:10.848 [2024-07-25 17:13:03.203186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:10.848 [2024-07-25 17:13:03.203197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:10.848 [2024-07-25 17:13:03.203229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:10.848 [2024-07-25 17:13:03.203240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:10.848 [2024-07-25 17:13:03.203252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:10.848 [2024-07-25 17:13:03.203263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:10.848 [2024-07-25 17:13:03.203280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:10.848 [2024-07-25 17:13:03.203294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:10.848 [2024-07-25 17:13:03.203321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:10.848 [2024-07-25 17:13:03.203334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:10.848 [2024-07-25 17:13:03.203345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:10.848 [2024-07-25 17:13:03.203359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:10.848 [2024-07-25 17:13:03.203370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:10.848 [2024-07-25 17:13:03.203383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:10.848 [2024-07-25 17:13:03.203394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:10.848 [2024-07-25 17:13:03.203410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:10.848 [2024-07-25 17:13:03.203421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:10.848 [2024-07-25 17:13:03.203485] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:10.848 [2024-07-25 17:13:03.203497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:10.848 [2024-07-25 17:13:03.203524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:10.848 [2024-07-25 17:13:03.203537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:10.848 [2024-07-25 17:13:03.203549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:10.848 [2024-07-25 17:13:03.203574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:10.848 [2024-07-25 17:13:03.203585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:10.848 [2024-07-25 17:13:03.203601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:22:10.848 [2024-07-25 17:13:03.203628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.848 [2024-07-25 17:13:03.203729] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
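The layout dump above is consistent with the bdev_ftl_create arguments: 10% overprovisioning over the 102400 MiB data_btm region leaves 23592960 user blocks, and at 4 bytes per entry that is exactly the 90 MiB l2p region. A quick back-of-the-envelope check (a hypothetical one-off, not part of the test scripts):

    # Hypothetical sanity check of the layout numbers printed above.
    data_mib=102400      # "Region data_btm ... blocks: 102400.00 MiB"
    block_size=4096      # base bdev block size
    op_pct=10            # --overprovisioning 10
    entries=$(( data_mib * 1024 * 1024 / block_size * (100 - op_pct) / 100 ))
    echo "$entries"                               # 23592960 -> "L2P entries: 23592960"
    echo "$(( entries * 4 / 1024 / 1024 )) MiB"   # 90 -> "Region l2p ... blocks: 90.00 MiB"

The same numbers fit the later "l2p maximum resident size is: 59 (of 60) MiB" notice: the --l2p_dram_limit 60 cap sits well below the 90 MiB full table, so the L2P runs only partially resident.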
00:22:10.848 [2024-07-25 17:13:03.203762] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:14.129 [2024-07-25 17:13:06.136700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.129 [2024-07-25 17:13:06.136799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:14.129 [2024-07-25 17:13:06.136840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2932.980 ms 00:22:14.129 [2024-07-25 17:13:06.136853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.129 [2024-07-25 17:13:06.173925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.129 [2024-07-25 17:13:06.174024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.130 [2024-07-25 17:13:06.174049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.725 ms 00:22:14.130 [2024-07-25 17:13:06.174061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.174258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.174284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:14.130 [2024-07-25 17:13:06.174316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:14.130 [2024-07-25 17:13:06.174327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.224761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.224830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.130 [2024-07-25 17:13:06.224868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.388 ms 00:22:14.130 [2024-07-25 17:13:06.224880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.225057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.225079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.130 [2024-07-25 17:13:06.225094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:14.130 [2024-07-25 17:13:06.225106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.225894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.225927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.130 [2024-07-25 17:13:06.225956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:22:14.130 [2024-07-25 17:13:06.225968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.226167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.226185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.130 [2024-07-25 17:13:06.226200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:14.130 [2024-07-25 17:13:06.226211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.249403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.249462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.130 [2024-07-25 
17:13:06.249499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.150 ms 00:22:14.130 [2024-07-25 17:13:06.249510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.263471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:14.130 [2024-07-25 17:13:06.290519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.290640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.130 [2024-07-25 17:13:06.290662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.879 ms 00:22:14.130 [2024-07-25 17:13:06.290678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.377695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.377806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:14.130 [2024-07-25 17:13:06.377829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.884 ms 00:22:14.130 [2024-07-25 17:13:06.377843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.378118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.378161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.130 [2024-07-25 17:13:06.378176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:22:14.130 [2024-07-25 17:13:06.378193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.406207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.406286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:14.130 [2024-07-25 17:13:06.406306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.971 ms 00:22:14.130 [2024-07-25 17:13:06.406320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.433383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.433475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:14.130 [2024-07-25 17:13:06.433495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.985 ms 00:22:14.130 [2024-07-25 17:13:06.433508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.434498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.434551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.130 [2024-07-25 17:13:06.434583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:22:14.130 [2024-07-25 17:13:06.434596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.523969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.524078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:14.130 [2024-07-25 17:13:06.524100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.323 ms 00:22:14.130 [2024-07-25 17:13:06.524118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 
17:13:06.554774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.554840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:14.130 [2024-07-25 17:13:06.554861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.576 ms 00:22:14.130 [2024-07-25 17:13:06.554875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.130 [2024-07-25 17:13:06.586135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.130 [2024-07-25 17:13:06.586216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:14.130 [2024-07-25 17:13:06.586235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.185 ms 00:22:14.130 [2024-07-25 17:13:06.586250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.389 [2024-07-25 17:13:06.614830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.389 [2024-07-25 17:13:06.614897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.389 [2024-07-25 17:13:06.614914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.501 ms 00:22:14.389 [2024-07-25 17:13:06.614928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.389 [2024-07-25 17:13:06.615020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.389 [2024-07-25 17:13:06.615044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.389 [2024-07-25 17:13:06.615057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:14.389 [2024-07-25 17:13:06.615073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.389 [2024-07-25 17:13:06.615181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.389 [2024-07-25 17:13:06.615201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.389 [2024-07-25 17:13:06.615213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:14.389 [2024-07-25 17:13:06.615249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.389 [2024-07-25 17:13:06.616662] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.389 [2024-07-25 17:13:06.620430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3437.868 ms, result 0 00:22:14.389 [2024-07-25 17:13:06.621444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.389 { 00:22:14.389 "name": "ftl0", 00:22:14.389 "uuid": "389a3003-122c-466b-a4fe-a4bfdc3017fc" 00:22:14.389 } 00:22:14.389 17:13:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:14.389 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:14.648 17:13:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:14.907 [ 00:22:14.907 { 00:22:14.907 "name": "ftl0", 00:22:14.907 "aliases": [ 00:22:14.907 "389a3003-122c-466b-a4fe-a4bfdc3017fc" 00:22:14.907 ], 00:22:14.907 "product_name": "FTL disk", 00:22:14.907 "block_size": 4096, 00:22:14.907 "num_blocks": 23592960, 00:22:14.907 "uuid": "389a3003-122c-466b-a4fe-a4bfdc3017fc", 00:22:14.907 "assigned_rate_limits": { 00:22:14.907 "rw_ios_per_sec": 0, 00:22:14.907 "rw_mbytes_per_sec": 0, 00:22:14.907 "r_mbytes_per_sec": 0, 00:22:14.907 "w_mbytes_per_sec": 0 00:22:14.907 }, 00:22:14.907 "claimed": false, 00:22:14.907 "zoned": false, 00:22:14.907 "supported_io_types": { 00:22:14.907 "read": true, 00:22:14.907 "write": true, 00:22:14.907 "unmap": true, 00:22:14.907 "flush": true, 00:22:14.907 "reset": false, 00:22:14.907 "nvme_admin": false, 00:22:14.907 "nvme_io": false, 00:22:14.907 "nvme_io_md": false, 00:22:14.907 "write_zeroes": true, 00:22:14.907 "zcopy": false, 00:22:14.907 "get_zone_info": false, 00:22:14.907 "zone_management": false, 00:22:14.907 "zone_append": false, 00:22:14.907 "compare": false, 00:22:14.907 "compare_and_write": false, 00:22:14.907 "abort": false, 00:22:14.907 "seek_hole": false, 00:22:14.907 "seek_data": false, 00:22:14.907 "copy": false, 00:22:14.907 "nvme_iov_md": false 00:22:14.907 }, 00:22:14.907 "driver_specific": { 00:22:14.907 "ftl": { 00:22:14.907 "base_bdev": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:14.907 "cache": "nvc0n1p0" 00:22:14.907 } 00:22:14.907 } 00:22:14.907 } 00:22:14.907 ] 00:22:14.907 17:13:07 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:22:14.907 17:13:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:14.907 17:13:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:15.165 17:13:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:15.165 17:13:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:15.424 17:13:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:15.424 { 00:22:15.424 "name": "ftl0", 00:22:15.424 "aliases": [ 00:22:15.424 "389a3003-122c-466b-a4fe-a4bfdc3017fc" 00:22:15.424 ], 00:22:15.424 "product_name": "FTL disk", 00:22:15.424 "block_size": 4096, 00:22:15.424 "num_blocks": 23592960, 00:22:15.424 "uuid": "389a3003-122c-466b-a4fe-a4bfdc3017fc", 00:22:15.424 "assigned_rate_limits": { 00:22:15.424 "rw_ios_per_sec": 0, 00:22:15.424 "rw_mbytes_per_sec": 0, 00:22:15.424 "r_mbytes_per_sec": 0, 00:22:15.424 "w_mbytes_per_sec": 0 00:22:15.424 }, 00:22:15.424 "claimed": false, 00:22:15.424 "zoned": false, 00:22:15.424 "supported_io_types": { 00:22:15.424 "read": true, 00:22:15.424 "write": true, 00:22:15.424 "unmap": true, 00:22:15.424 "flush": true, 00:22:15.424 "reset": false, 00:22:15.424 "nvme_admin": false, 00:22:15.424 "nvme_io": false, 00:22:15.424 "nvme_io_md": false, 00:22:15.424 "write_zeroes": true, 00:22:15.424 "zcopy": false, 00:22:15.424 "get_zone_info": false, 00:22:15.424 "zone_management": false, 00:22:15.424 "zone_append": false, 00:22:15.424 "compare": false, 00:22:15.424 "compare_and_write": false, 00:22:15.424 "abort": false, 00:22:15.424 "seek_hole": false, 00:22:15.424 "seek_data": false, 00:22:15.424 "copy": false, 00:22:15.424 "nvme_iov_md": false 00:22:15.424 }, 00:22:15.424 "driver_specific": { 00:22:15.424 "ftl": { 00:22:15.424 "base_bdev": "8ad6ddb8-4d44-4c8a-b0e1-7f93b27f9edb", 00:22:15.424 "cache": "nvc0n1p0" 
00:22:15.424 } 00:22:15.424 } 00:22:15.424 } 00:22:15.424 ]' 00:22:15.424 17:13:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:15.424 17:13:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:15.424 17:13:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:15.683 [2024-07-25 17:13:07.899769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.899839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:15.683 [2024-07-25 17:13:07.899881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:15.683 [2024-07-25 17:13:07.899893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.899943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:15.683 [2024-07-25 17:13:07.903562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.903598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:15.683 [2024-07-25 17:13:07.903630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:22:15.683 [2024-07-25 17:13:07.903646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.904208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.904240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:15.683 [2024-07-25 17:13:07.904270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:22:15.683 [2024-07-25 17:13:07.904303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.908003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.908067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:15.683 [2024-07-25 17:13:07.908083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:22:15.683 [2024-07-25 17:13:07.908096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.914613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.914675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:15.683 [2024-07-25 17:13:07.914706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.448 ms 00:22:15.683 [2024-07-25 17:13:07.914719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.943504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.943569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:15.683 [2024-07-25 17:13:07.943587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.687 ms 00:22:15.683 [2024-07-25 17:13:07.943603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.961388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.961437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:15.683 [2024-07-25 17:13:07.961474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.695 ms 00:22:15.683 
[2024-07-25 17:13:07.961488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.961711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.961737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:15.683 [2024-07-25 17:13:07.961749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:22:15.683 [2024-07-25 17:13:07.961762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:07.989322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:07.989396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:15.683 [2024-07-25 17:13:07.989412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.527 ms 00:22:15.683 [2024-07-25 17:13:07.989424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:08.016815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:08.016879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:15.683 [2024-07-25 17:13:08.016897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.299 ms 00:22:15.683 [2024-07-25 17:13:08.016912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:08.045586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:08.045647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:15.683 [2024-07-25 17:13:08.045664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.546 ms 00:22:15.683 [2024-07-25 17:13:08.045677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:08.073227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.683 [2024-07-25 17:13:08.073291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:15.683 [2024-07-25 17:13:08.073308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.402 ms 00:22:15.683 [2024-07-25 17:13:08.073320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.683 [2024-07-25 17:13:08.073432] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:15.683 [2024-07-25 17:13:08.073462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073551] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073874] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:15.683 [2024-07-25 17:13:08.073898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.073910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.073923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.073934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.073958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.073969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 
17:13:08.074239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:22:15.684 [2024-07-25 17:13:08.074564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:15.684 [2024-07-25 17:13:08.074840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:15.684 [2024-07-25 17:13:08.074852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:15.684 [2024-07-25 17:13:08.074869] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:15.684 [2024-07-25 17:13:08.074883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:15.684 [2024-07-25 17:13:08.074896] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:15.684 [2024-07-25 17:13:08.074907] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:15.684 [2024-07-25 17:13:08.074920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:15.684 [2024-07-25 17:13:08.074932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:15.684 [2024-07-25 17:13:08.074954] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:15.684 [2024-07-25 17:13:08.074980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:15.684 [2024-07-25 17:13:08.074991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:15.684 [2024-07-25 17:13:08.075002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.684 [2024-07-25 17:13:08.075027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:15.684 [2024-07-25 17:13:08.075040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:22:15.684 [2024-07-25 17:13:08.075053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.684 [2024-07-25 17:13:08.091095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.684 [2024-07-25 17:13:08.091155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:15.684 [2024-07-25 17:13:08.091172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.004 ms 00:22:15.684 [2024-07-25 17:13:08.091187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.684 [2024-07-25 17:13:08.091691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.684 [2024-07-25 17:13:08.091719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:15.684 [2024-07-25 17:13:08.091733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:22:15.684 [2024-07-25 17:13:08.091745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.149096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.149167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:15.943 [2024-07-25 17:13:08.149185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.149199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.149351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.149373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:15.943 [2024-07-25 17:13:08.149386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.149400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.149483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.149506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:15.943 [2024-07-25 17:13:08.149518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.149534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.149572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.149588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:15.943 [2024-07-25 17:13:08.149599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.149612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.247974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:15.943 [2024-07-25 17:13:08.248075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:15.943 [2024-07-25 17:13:08.248095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.248109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.324500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.324593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:15.943 [2024-07-25 17:13:08.324612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.324626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.324783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.324810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:15.943 [2024-07-25 17:13:08.324823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.324839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.324907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.324923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:15.943 [2024-07-25 17:13:08.324935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.324947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.325115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.325141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:15.943 [2024-07-25 17:13:08.325175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.325189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.325259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.325298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:15.943 [2024-07-25 17:13:08.325311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.325327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.325391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.325410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:15.943 [2024-07-25 17:13:08.325426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.325441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 17:13:08.325512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:15.943 [2024-07-25 17:13:08.325534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:15.943 [2024-07-25 17:13:08.325546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:15.943 [2024-07-25 17:13:08.325559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.943 [2024-07-25 
17:13:08.325807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.024 ms, result 0 00:22:15.943 true 00:22:15.943 17:13:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79568 00:22:15.943 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79568 ']' 00:22:15.943 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79568 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79568 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:15.944 killing process with pid 79568 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79568' 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79568 00:22:15.944 17:13:08 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79568 00:22:21.209 17:13:12 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:21.775 65536+0 records in 00:22:21.775 65536+0 records out 00:22:21.775 268435456 bytes (268 MB, 256 MiB) copied, 1.06606 s, 252 MB/s 00:22:21.775 17:13:14 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:21.775 [2024-07-25 17:13:14.083169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
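The dd step above sizes the trim test's input pattern: 65536 records of 4 KiB each. A minimal sanity check of that arithmetic (not part of the captured log; the path is simply the --if argument passed to spdk_dd above):

# 65536 records x 4096 bytes/record = 268435456 bytes = 256 MiB, matching the
# "268435456 bytes (268 MB, 256 MiB) copied" line reported by dd above.
echo $((65536 * 4096))    # prints 268435456
# If dd wrote the pattern to the file later fed to spdk_dd (an assumption,
# since dd's of= target is not shown in the log), its size should agree:
stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern    # expected 268435456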
00:22:21.775 [2024-07-25 17:13:14.083310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79758 ] 00:22:22.034 [2024-07-25 17:13:14.245487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.034 [2024-07-25 17:13:14.490972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.599 [2024-07-25 17:13:14.799135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:22.599 [2024-07-25 17:13:14.799235] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:22.599 [2024-07-25 17:13:14.960505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.960571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:22.599 [2024-07-25 17:13:14.960607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:22.599 [2024-07-25 17:13:14.960618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.963851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.963895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:22.599 [2024-07-25 17:13:14.963927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:22:22.599 [2024-07-25 17:13:14.963937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.964154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:22.599 [2024-07-25 17:13:14.965153] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:22.599 [2024-07-25 17:13:14.965192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.965223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:22.599 [2024-07-25 17:13:14.965235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:22:22.599 [2024-07-25 17:13:14.965246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.967431] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:22.599 [2024-07-25 17:13:14.982242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.982284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:22.599 [2024-07-25 17:13:14.982324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.812 ms 00:22:22.599 [2024-07-25 17:13:14.982334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.982443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.982464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:22.599 [2024-07-25 17:13:14.982476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:22.599 [2024-07-25 17:13:14.982486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.991341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:22.599 [2024-07-25 17:13:14.991381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:22.599 [2024-07-25 17:13:14.991414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:22:22.599 [2024-07-25 17:13:14.991425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.991537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.991557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:22.599 [2024-07-25 17:13:14.991569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:22.599 [2024-07-25 17:13:14.991580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.991619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.991633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:22.599 [2024-07-25 17:13:14.991648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:22.599 [2024-07-25 17:13:14.991659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.991688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:22.599 [2024-07-25 17:13:14.996330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.996366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:22.599 [2024-07-25 17:13:14.996413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:22:22.599 [2024-07-25 17:13:14.996423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.996506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.996524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:22.599 [2024-07-25 17:13:14.996536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:22.599 [2024-07-25 17:13:14.996546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.996574] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:22.599 [2024-07-25 17:13:14.996603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:22.599 [2024-07-25 17:13:14.996644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:22.599 [2024-07-25 17:13:14.996662] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:22.599 [2024-07-25 17:13:14.996752] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:22.599 [2024-07-25 17:13:14.996769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:22.599 [2024-07-25 17:13:14.996782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:22.599 [2024-07-25 17:13:14.996795] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:22.599 [2024-07-25 17:13:14.996807] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:22.599 [2024-07-25 17:13:14.996823] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:22.599 [2024-07-25 17:13:14.996833] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:22.599 [2024-07-25 17:13:14.996843] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:22.599 [2024-07-25 17:13:14.996852] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:22.599 [2024-07-25 17:13:14.996863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.996873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:22.599 [2024-07-25 17:13:14.996883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:22.599 [2024-07-25 17:13:14.996893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.996975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.996988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:22.599 [2024-07-25 17:13:14.997004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:22.599 [2024-07-25 17:13:14.997059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:14.997176] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:22.599 [2024-07-25 17:13:14.997192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:22.599 [2024-07-25 17:13:14.997205] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:22.599 [2024-07-25 17:13:14.997236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:22.599 [2024-07-25 17:13:14.997269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:22.599 [2024-07-25 17:13:14.997288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:22.599 [2024-07-25 17:13:14.997297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:22.599 [2024-07-25 17:13:14.997306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:22.599 [2024-07-25 17:13:14.997316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:22.599 [2024-07-25 17:13:14.997342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:22.599 [2024-07-25 17:13:14.997370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:22.599 [2024-07-25 17:13:14.997391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997416] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:22.599 [2024-07-25 17:13:14.997452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:22.599 [2024-07-25 17:13:14.997482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:22.599 [2024-07-25 17:13:14.997511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:22.599 [2024-07-25 17:13:14.997541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:22.599 [2024-07-25 17:13:14.997569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:22.599 [2024-07-25 17:13:14.997589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:22.599 [2024-07-25 17:13:14.997599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:22.599 [2024-07-25 17:13:14.997609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:22.599 [2024-07-25 17:13:14.997618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:22.599 [2024-07-25 17:13:14.997628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:22.599 [2024-07-25 17:13:14.997637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:22.599 [2024-07-25 17:13:14.997657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:22.599 [2024-07-25 17:13:14.997666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997675] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:22.599 [2024-07-25 17:13:14.997686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:22.599 [2024-07-25 17:13:14.997697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:22.599 [2024-07-25 17:13:14.997724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:22.599 [2024-07-25 17:13:14.997734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:22.599 [2024-07-25 17:13:14.997745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:22.599 
[2024-07-25 17:13:14.997755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:22.599 [2024-07-25 17:13:14.997765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:22.599 [2024-07-25 17:13:14.997775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:22.599 [2024-07-25 17:13:14.997787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:22.599 [2024-07-25 17:13:14.997800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:22.599 [2024-07-25 17:13:14.997823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:22.599 [2024-07-25 17:13:14.997834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:22.599 [2024-07-25 17:13:14.997844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:22.599 [2024-07-25 17:13:14.997855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:22.599 [2024-07-25 17:13:14.997865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:22.599 [2024-07-25 17:13:14.997876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:22.599 [2024-07-25 17:13:14.997886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:22.599 [2024-07-25 17:13:14.997897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:22.599 [2024-07-25 17:13:14.997907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:22.599 [2024-07-25 17:13:14.997960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:22.599 [2024-07-25 17:13:14.997971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:22.599 [2024-07-25 17:13:14.997994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:22.599 [2024-07-25 17:13:14.998005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:22.599 [2024-07-25 17:13:14.998015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:22.599 [2024-07-25 17:13:14.998027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:14.998038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:22.599 [2024-07-25 17:13:14.998049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:22:22.599 [2024-07-25 17:13:14.998077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:15.045655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:15.045709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.599 [2024-07-25 17:13:15.045752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.501 ms 00:22:22.599 [2024-07-25 17:13:15.045764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.599 [2024-07-25 17:13:15.045956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.599 [2024-07-25 17:13:15.045976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:22.599 [2024-07-25 17:13:15.046068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:22.599 [2024-07-25 17:13:15.046083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.087400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.856 [2024-07-25 17:13:15.087448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.856 [2024-07-25 17:13:15.087482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.281 ms 00:22:22.856 [2024-07-25 17:13:15.087493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.087638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.856 [2024-07-25 17:13:15.087658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.856 [2024-07-25 17:13:15.087670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:22.856 [2024-07-25 17:13:15.087680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.088359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.856 [2024-07-25 17:13:15.088386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.856 [2024-07-25 17:13:15.088409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:22:22.856 [2024-07-25 17:13:15.088420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.088610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.856 [2024-07-25 17:13:15.088629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.856 [2024-07-25 17:13:15.088641] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:22:22.856 [2024-07-25 17:13:15.088667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.105925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.856 [2024-07-25 17:13:15.105967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.856 [2024-07-25 17:13:15.106031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.231 ms 00:22:22.856 [2024-07-25 17:13:15.106043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.856 [2024-07-25 17:13:15.120782] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:22.856 [2024-07-25 17:13:15.120845] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:22.857 [2024-07-25 17:13:15.120880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.120892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:22.857 [2024-07-25 17:13:15.120903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.663 ms 00:22:22.857 [2024-07-25 17:13:15.120914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.146168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.146217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:22.857 [2024-07-25 17:13:15.146251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:22:22.857 [2024-07-25 17:13:15.146262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.160040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.160094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:22.857 [2024-07-25 17:13:15.160127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.684 ms 00:22:22.857 [2024-07-25 17:13:15.160138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.173355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.173395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:22.857 [2024-07-25 17:13:15.173427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.108 ms 00:22:22.857 [2024-07-25 17:13:15.173437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.174321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.174361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:22.857 [2024-07-25 17:13:15.174377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:22:22.857 [2024-07-25 17:13:15.174388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.243873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.243953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:22.857 [2024-07-25 17:13:15.244029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.428 ms 00:22:22.857 [2024-07-25 17:13:15.244046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.254801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:22.857 [2024-07-25 17:13:15.273447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.273502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:22.857 [2024-07-25 17:13:15.273537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.249 ms 00:22:22.857 [2024-07-25 17:13:15.273548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.273667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.273686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:22.857 [2024-07-25 17:13:15.273703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:22.857 [2024-07-25 17:13:15.273713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.273783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.273798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:22.857 [2024-07-25 17:13:15.273808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:22.857 [2024-07-25 17:13:15.273819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.273850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.273863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:22.857 [2024-07-25 17:13:15.273886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:22.857 [2024-07-25 17:13:15.273900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.273937] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:22.857 [2024-07-25 17:13:15.273953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.273963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:22.857 [2024-07-25 17:13:15.273974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:22.857 [2024-07-25 17:13:15.273984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.301035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.301076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:22.857 [2024-07-25 17:13:15.301115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.962 ms 00:22:22.857 [2024-07-25 17:13:15.301126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.857 [2024-07-25 17:13:15.301228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.857 [2024-07-25 17:13:15.301247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:22.857 [2024-07-25 17:13:15.301259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:22.857 [2024-07-25 17:13:15.301269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:22.857 [2024-07-25 17:13:15.302780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:22.857 [2024-07-25 17:13:15.306356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.806 ms, result 0 00:22:22.857 [2024-07-25 17:13:15.307400] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.857 [2024-07-25 17:13:15.321509] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:34.666  Copying: 22/256 [MB] (22 MBps) Copying: 43/256 [MB] (21 MBps) Copying: 65/256 [MB] (21 MBps) Copying: 86/256 [MB] (21 MBps) Copying: 108/256 [MB] (21 MBps) Copying: 130/256 [MB] (21 MBps) Copying: 152/256 [MB] (22 MBps) Copying: 174/256 [MB] (22 MBps) Copying: 197/256 [MB] (22 MBps) Copying: 220/256 [MB] (22 MBps) Copying: 242/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-25 17:13:26.887249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.666 [2024-07-25 17:13:26.898670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.898711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.666 [2024-07-25 17:13:26.898748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:34.666 [2024-07-25 17:13:26.898759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.898788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:34.666 [2024-07-25 17:13:26.902144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.902180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.666 [2024-07-25 17:13:26.902210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:22:34.666 [2024-07-25 17:13:26.902220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.904156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.904195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:34.666 [2024-07-25 17:13:26.904238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:22:34.666 [2024-07-25 17:13:26.904248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.911790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.911831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:34.666 [2024-07-25 17:13:26.911863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:22:34.666 [2024-07-25 17:13:26.911881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.918202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.918236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:34.666 [2024-07-25 17:13:26.918266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:22:34.666 [2024-07-25 17:13:26.918275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:34.666 [2024-07-25 17:13:26.944822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.944864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:34.666 [2024-07-25 17:13:26.944897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.500 ms 00:22:34.666 [2024-07-25 17:13:26.944908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.961632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.961673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:34.666 [2024-07-25 17:13:26.961705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.665 ms 00:22:34.666 [2024-07-25 17:13:26.961716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.961870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.961890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.666 [2024-07-25 17:13:26.961903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:34.666 [2024-07-25 17:13:26.961913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:26.989356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:26.989412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:34.666 [2024-07-25 17:13:26.989444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.421 ms 00:22:34.666 [2024-07-25 17:13:26.989453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:27.015907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:27.015946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:34.666 [2024-07-25 17:13:27.015977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.394 ms 00:22:34.666 [2024-07-25 17:13:27.015987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:27.042457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:27.042498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.666 [2024-07-25 17:13:27.042530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.379 ms 00:22:34.666 [2024-07-25 17:13:27.042540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:27.068904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.666 [2024-07-25 17:13:27.068944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.666 [2024-07-25 17:13:27.068975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.277 ms 00:22:34.666 [2024-07-25 17:13:27.068985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.666 [2024-07-25 17:13:27.069076] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.666 [2024-07-25 17:13:27.069122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.666 [2024-07-25 17:13:27.069601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069767] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.069991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070045] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:34.667 [2024-07-25 17:13:27.070338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:34.667 [2024-07-25 17:13:27.070349] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:34.667 [2024-07-25 17:13:27.070362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:34.667 [2024-07-25 17:13:27.070372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:34.667 [2024-07-25 17:13:27.070383] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:34.667 [2024-07-25 17:13:27.070407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:34.667 [2024-07-25 17:13:27.070419] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:34.667 [2024-07-25 17:13:27.070430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:34.667 [2024-07-25 17:13:27.070441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:34.667 [2024-07-25 17:13:27.070450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:34.667 [2024-07-25 17:13:27.070460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:34.667 [2024-07-25 17:13:27.070470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.667 [2024-07-25 17:13:27.070481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:34.667 [2024-07-25 17:13:27.070492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:22:34.667 [2024-07-25 17:13:27.070508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.667 [2024-07-25 17:13:27.085608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.667 [2024-07-25 17:13:27.085646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:34.667 [2024-07-25 17:13:27.085678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.075 ms 00:22:34.667 [2024-07-25 17:13:27.085688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.667 [2024-07-25 17:13:27.086208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.667 [2024-07-25 17:13:27.086235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:34.667 [2024-07-25 17:13:27.086257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:34.667 [2024-07-25 17:13:27.086268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.667 [2024-07-25 17:13:27.122590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.667 [2024-07-25 17:13:27.122655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.667 [2024-07-25 17:13:27.122671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.668 [2024-07-25 17:13:27.122682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.668 [2024-07-25 17:13:27.122786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.668 [2024-07-25 17:13:27.122804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.668 [2024-07-25 17:13:27.122821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.668 [2024-07-25 17:13:27.122831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.668 [2024-07-25 17:13:27.122885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.668 [2024-07-25 17:13:27.122903] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.668 [2024-07-25 17:13:27.122914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.668 [2024-07-25 17:13:27.122924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.668 [2024-07-25 17:13:27.122948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.668 [2024-07-25 17:13:27.122977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.668 [2024-07-25 17:13:27.122988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.668 [2024-07-25 17:13:27.123004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.208266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.208327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.926 [2024-07-25 17:13:27.208362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.208372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.281638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.281696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.926 [2024-07-25 17:13:27.281736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.281747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.281853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.281870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.926 [2024-07-25 17:13:27.281881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.281892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.281927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.281940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.926 [2024-07-25 17:13:27.281951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.281962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.282147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.282167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.926 [2024-07-25 17:13:27.282180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.282191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.282250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.282268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:34.926 [2024-07-25 17:13:27.282280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.282290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.282342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:34.926 [2024-07-25 17:13:27.282358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.926 [2024-07-25 17:13:27.282370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.926 [2024-07-25 17:13:27.282381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.926 [2024-07-25 17:13:27.282465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.926 [2024-07-25 17:13:27.282481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.927 [2024-07-25 17:13:27.282492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.927 [2024-07-25 17:13:27.282502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.927 [2024-07-25 17:13:27.282719] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.043 ms, result 0 00:22:36.301 00:22:36.301 00:22:36.301 17:13:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79905 00:22:36.301 17:13:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:36.301 17:13:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79905 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79905 ']' 00:22:36.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.301 17:13:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:36.301 [2024-07-25 17:13:28.548806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:36.301 [2024-07-25 17:13:28.549048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79905 ] 00:22:36.301 [2024-07-25 17:13:28.716968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.560 [2024-07-25 17:13:28.911643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.495 17:13:29 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.495 17:13:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:37.495 17:13:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:37.495 [2024-07-25 17:13:29.851527] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:37.495 [2024-07-25 17:13:29.851618] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:37.754 [2024-07-25 17:13:30.034936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.035109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:37.754 [2024-07-25 17:13:30.035133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:37.754 [2024-07-25 17:13:30.035150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.039243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.039344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.754 [2024-07-25 17:13:30.039363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:22:37.754 [2024-07-25 17:13:30.039381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.039559] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:37.754 [2024-07-25 17:13:30.040557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:37.754 [2024-07-25 17:13:30.040597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.040635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.754 [2024-07-25 17:13:30.040649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:22:37.754 [2024-07-25 17:13:30.040687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.043227] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:37.754 [2024-07-25 17:13:30.060323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.060370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:37.754 [2024-07-25 17:13:30.060416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.084 ms 00:22:37.754 [2024-07-25 17:13:30.060429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.060568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.060603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:37.754 [2024-07-25 17:13:30.060622] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:37.754 [2024-07-25 17:13:30.060635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.069966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.070025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.754 [2024-07-25 17:13:30.070073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.229 ms 00:22:37.754 [2024-07-25 17:13:30.070086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.070266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.070289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.754 [2024-07-25 17:13:30.070308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:37.754 [2024-07-25 17:13:30.070327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.070379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.070396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:37.754 [2024-07-25 17:13:30.070413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:37.754 [2024-07-25 17:13:30.070425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.070471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:37.754 [2024-07-25 17:13:30.075872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.076116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.754 [2024-07-25 17:13:30.076252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.419 ms 00:22:37.754 [2024-07-25 17:13:30.076318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.076546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.076682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:37.754 [2024-07-25 17:13:30.076822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:37.754 [2024-07-25 17:13:30.076958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.077116] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:37.754 [2024-07-25 17:13:30.077275] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:37.754 [2024-07-25 17:13:30.077469] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:37.754 [2024-07-25 17:13:30.077508] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:37.754 [2024-07-25 17:13:30.077609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:37.754 [2024-07-25 17:13:30.077643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:37.754 [2024-07-25 17:13:30.077659] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:37.754 [2024-07-25 17:13:30.077681] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:37.754 [2024-07-25 17:13:30.077696] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:37.754 [2024-07-25 17:13:30.077713] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:37.754 [2024-07-25 17:13:30.077725] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:37.754 [2024-07-25 17:13:30.077742] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:37.754 [2024-07-25 17:13:30.077753] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:37.754 [2024-07-25 17:13:30.077776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.077789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:37.754 [2024-07-25 17:13:30.077806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:22:37.754 [2024-07-25 17:13:30.077823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.077924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.754 [2024-07-25 17:13:30.077941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:37.754 [2024-07-25 17:13:30.077959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:37.754 [2024-07-25 17:13:30.077971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.754 [2024-07-25 17:13:30.078142] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:37.754 [2024-07-25 17:13:30.078165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:37.754 [2024-07-25 17:13:30.078184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.754 [2024-07-25 17:13:30.078209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.754 [2024-07-25 17:13:30.078237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:37.754 [2024-07-25 17:13:30.078248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:37.754 [2024-07-25 17:13:30.078264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:37.754 [2024-07-25 17:13:30.078276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:37.754 [2024-07-25 17:13:30.078297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:37.754 [2024-07-25 17:13:30.078308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.754 [2024-07-25 17:13:30.078323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:37.754 [2024-07-25 17:13:30.078335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:37.755 [2024-07-25 17:13:30.078365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.755 [2024-07-25 17:13:30.078376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:37.755 [2024-07-25 17:13:30.078392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:37.755 [2024-07-25 17:13:30.078403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.755 
[2024-07-25 17:13:30.078417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:37.755 [2024-07-25 17:13:30.078429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:37.755 [2024-07-25 17:13:30.078470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:37.755 [2024-07-25 17:13:30.078506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:37.755 [2024-07-25 17:13:30.078551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:37.755 [2024-07-25 17:13:30.078605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:37.755 [2024-07-25 17:13:30.078691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.755 [2024-07-25 17:13:30.078720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:37.755 [2024-07-25 17:13:30.078732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:37.755 [2024-07-25 17:13:30.078749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.755 [2024-07-25 17:13:30.078760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:37.755 [2024-07-25 17:13:30.078776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:37.755 [2024-07-25 17:13:30.078788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:37.755 [2024-07-25 17:13:30.078821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:37.755 [2024-07-25 17:13:30.078836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078848] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:37.755 [2024-07-25 17:13:30.078865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:37.755 [2024-07-25 17:13:30.078877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.755 [2024-07-25 17:13:30.078901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.755 [2024-07-25 17:13:30.078914] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:37.755 [2024-07-25 17:13:30.078930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:37.755 [2024-07-25 17:13:30.078943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:37.755 [2024-07-25 17:13:30.078959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:37.755 [2024-07-25 17:13:30.078985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:37.755 [2024-07-25 17:13:30.079272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:37.755 [2024-07-25 17:13:30.079400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:37.755 [2024-07-25 17:13:30.079474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.079537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:37.755 [2024-07-25 17:13:30.079719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:37.755 [2024-07-25 17:13:30.079931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:37.755 [2024-07-25 17:13:30.080061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:37.755 [2024-07-25 17:13:30.080127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:37.755 [2024-07-25 17:13:30.080191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:37.755 [2024-07-25 17:13:30.080329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:37.755 [2024-07-25 17:13:30.080392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:37.755 [2024-07-25 17:13:30.080519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:37.755 [2024-07-25 17:13:30.080596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.080697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.080826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.081071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.081217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:37.755 [2024-07-25 17:13:30.081238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:37.755 [2024-07-25 
17:13:30.081257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.081271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.755 [2024-07-25 17:13:30.081294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:37.755 [2024-07-25 17:13:30.081307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:37.755 [2024-07-25 17:13:30.081324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:37.755 [2024-07-25 17:13:30.081339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.081356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:37.755 [2024-07-25 17:13:30.081385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:22:37.755 [2024-07-25 17:13:30.081411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.120578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.120668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:37.755 [2024-07-25 17:13:30.120696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.035 ms 00:22:37.755 [2024-07-25 17:13:30.120713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.120894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.120924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:37.755 [2024-07-25 17:13:30.120939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:37.755 [2024-07-25 17:13:30.120955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.164645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.164733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:37.755 [2024-07-25 17:13:30.164753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.620 ms 00:22:37.755 [2024-07-25 17:13:30.164770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.164901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.164930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:37.755 [2024-07-25 17:13:30.164946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:37.755 [2024-07-25 17:13:30.164963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.165865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.165932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:37.755 [2024-07-25 17:13:30.165948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:22:37.755 [2024-07-25 17:13:30.165964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.166189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.166217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:37.755 [2024-07-25 17:13:30.166230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:22:37.755 [2024-07-25 17:13:30.166246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.188598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.755 [2024-07-25 17:13:30.188672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:37.755 [2024-07-25 17:13:30.188690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.321 ms 00:22:37.755 [2024-07-25 17:13:30.188707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.755 [2024-07-25 17:13:30.204796] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:37.755 [2024-07-25 17:13:30.204846] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:37.755 [2024-07-25 17:13:30.204887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.756 [2024-07-25 17:13:30.204904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:37.756 [2024-07-25 17:13:30.204918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.034 ms 00:22:37.756 [2024-07-25 17:13:30.204934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.014 [2024-07-25 17:13:30.230896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.014 [2024-07-25 17:13:30.230964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:38.014 [2024-07-25 17:13:30.231028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.834 ms 00:22:38.014 [2024-07-25 17:13:30.231054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.014 [2024-07-25 17:13:30.244913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.014 [2024-07-25 17:13:30.245039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:38.014 [2024-07-25 17:13:30.245076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.751 ms 00:22:38.014 [2024-07-25 17:13:30.245099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.014 [2024-07-25 17:13:30.259327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.014 [2024-07-25 17:13:30.259413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:38.014 [2024-07-25 17:13:30.259432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.136 ms 00:22:38.014 [2024-07-25 17:13:30.259449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.014 [2024-07-25 17:13:30.260382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.014 [2024-07-25 17:13:30.260476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:38.014 [2024-07-25 17:13:30.260494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:22:38.014 [2024-07-25 17:13:30.260511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 
17:13:30.361255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.361350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:38.015 [2024-07-25 17:13:30.361374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.709 ms 00:22:38.015 [2024-07-25 17:13:30.361392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.373408] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:38.015 [2024-07-25 17:13:30.399276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.399349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:38.015 [2024-07-25 17:13:30.399400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.686 ms 00:22:38.015 [2024-07-25 17:13:30.399414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.399566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.399588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:38.015 [2024-07-25 17:13:30.399607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:38.015 [2024-07-25 17:13:30.399621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.399713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.399731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:38.015 [2024-07-25 17:13:30.399756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:38.015 [2024-07-25 17:13:30.399768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.399809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.399825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:38.015 [2024-07-25 17:13:30.399843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:38.015 [2024-07-25 17:13:30.399856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.399909] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:38.015 [2024-07-25 17:13:30.399927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.399948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:38.015 [2024-07-25 17:13:30.399961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:38.015 [2024-07-25 17:13:30.400027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.429610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.429683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:38.015 [2024-07-25 17:13:30.429703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.543 ms 00:22:38.015 [2024-07-25 17:13:30.429721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.429848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.015 [2024-07-25 17:13:30.429886] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:38.015 [2024-07-25 17:13:30.429906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:38.015 [2024-07-25 17:13:30.429922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.015 [2024-07-25 17:13:30.431610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:38.015 [2024-07-25 17:13:30.435590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.186 ms, result 0 00:22:38.015 [2024-07-25 17:13:30.436878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:38.015 Some configs were skipped because the RPC state that can call them passed over. 00:22:38.015 17:13:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:38.273 [2024-07-25 17:13:30.725665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.273 [2024-07-25 17:13:30.725728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:38.273 [2024-07-25 17:13:30.725780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:22:38.273 [2024-07-25 17:13:30.725795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.273 [2024-07-25 17:13:30.725911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.980 ms, result 0 00:22:38.273 true 00:22:38.532 17:13:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:38.789 [2024-07-25 17:13:31.009431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.790 [2024-07-25 17:13:31.009512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:38.790 [2024-07-25 17:13:31.009581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:22:38.790 [2024-07-25 17:13:31.009598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.790 [2024-07-25 17:13:31.009669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.544 ms, result 0 00:22:38.790 true 00:22:38.790 17:13:31 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79905 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79905 ']' 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79905 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79905 00:22:38.790 killing process with pid 79905 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79905' 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79905 00:22:38.790 17:13:31 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79905 00:22:39.725 [2024-07-25 17:13:31.962527] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.962606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:39.725 [2024-07-25 17:13:31.962658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.725 [2024-07-25 17:13:31.962673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.962727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:39.725 [2024-07-25 17:13:31.966380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.966415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:39.725 [2024-07-25 17:13:31.966447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:22:39.725 [2024-07-25 17:13:31.966461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.966766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.966797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:39.725 [2024-07-25 17:13:31.966811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:39.725 [2024-07-25 17:13:31.966823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.970680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.970738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:39.725 [2024-07-25 17:13:31.970756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.834 ms 00:22:39.725 [2024-07-25 17:13:31.970770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.977306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.977364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:39.725 [2024-07-25 17:13:31.977381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.472 ms 00:22:39.725 [2024-07-25 17:13:31.977395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.988554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.988613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:39.725 [2024-07-25 17:13:31.988629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.072 ms 00:22:39.725 [2024-07-25 17:13:31.988642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.997764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.997814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:39.725 [2024-07-25 17:13:31.997847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.079 ms 00:22:39.725 [2024-07-25 17:13:31.997859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:31.998070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:31.998096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:39.725 [2024-07-25 17:13:31.998109] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:39.725 [2024-07-25 17:13:31.998134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:32.009921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:32.009981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:39.725 [2024-07-25 17:13:32.010030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.763 ms 00:22:39.725 [2024-07-25 17:13:32.010046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:32.021636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:32.021695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:39.725 [2024-07-25 17:13:32.021711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.548 ms 00:22:39.725 [2024-07-25 17:13:32.021729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.725 [2024-07-25 17:13:32.032633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.725 [2024-07-25 17:13:32.032694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:39.726 [2024-07-25 17:13:32.032710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.861 ms 00:22:39.726 [2024-07-25 17:13:32.032723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.726 [2024-07-25 17:13:32.043588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.726 [2024-07-25 17:13:32.043648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:39.726 [2024-07-25 17:13:32.043664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.793 ms 00:22:39.726 [2024-07-25 17:13:32.043676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.726 [2024-07-25 17:13:32.043718] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:39.726 [2024-07-25 17:13:32.043745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 
17:13:32.043870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.043974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:39.726 [2024-07-25 17:13:32.044237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:39.726 [2024-07-25 17:13:32.044845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.044992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:39.727 [2024-07-25 17:13:32.045191] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:39.727 [2024-07-25 17:13:32.045203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:39.727 [2024-07-25 17:13:32.045219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:39.727 [2024-07-25 17:13:32.045230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:39.727 [2024-07-25 17:13:32.045243] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:39.727 [2024-07-25 17:13:32.045254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:39.727 [2024-07-25 17:13:32.045267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:39.727 [2024-07-25 17:13:32.045278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:39.727 [2024-07-25 17:13:32.045291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:39.727 [2024-07-25 17:13:32.045301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:39.727 [2024-07-25 17:13:32.045326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:39.727 [2024-07-25 17:13:32.045337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:39.727 [2024-07-25 17:13:32.045350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:39.727 [2024-07-25 17:13:32.045363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:22:39.727 [2024-07-25 17:13:32.045379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.061455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.727 [2024-07-25 17:13:32.061540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:39.727 [2024-07-25 17:13:32.061557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.034 ms 00:22:39.727 [2024-07-25 17:13:32.061572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.062114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.727 [2024-07-25 17:13:32.062176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:39.727 [2024-07-25 17:13:32.062194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:22:39.727 [2024-07-25 17:13:32.062207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.114022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.727 [2024-07-25 17:13:32.114087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.727 [2024-07-25 17:13:32.114103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.727 [2024-07-25 17:13:32.114117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.114212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.727 [2024-07-25 17:13:32.114234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.727 [2024-07-25 17:13:32.114249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.727 [2024-07-25 17:13:32.114262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.114315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.727 [2024-07-25 17:13:32.114337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.727 [2024-07-25 17:13:32.114348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.727 [2024-07-25 17:13:32.114363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.727 [2024-07-25 17:13:32.114387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.727 [2024-07-25 17:13:32.114403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.727 [2024-07-25 17:13:32.114413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.727 [2024-07-25 17:13:32.114428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.203836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.203930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.986 [2024-07-25 17:13:32.203951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.203965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 
17:13:32.279457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.279533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.986 [2024-07-25 17:13:32.279555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.279569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.279654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.279677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:39.986 [2024-07-25 17:13:32.279689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.279706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.279744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.279760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.986 [2024-07-25 17:13:32.279771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.279785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.279908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.279930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.986 [2024-07-25 17:13:32.279942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.279954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.280065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.280090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:39.986 [2024-07-25 17:13:32.280104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.280117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.280187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.280207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.986 [2024-07-25 17:13:32.280220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.280239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.280297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.986 [2024-07-25 17:13:32.280318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.986 [2024-07-25 17:13:32.280330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.986 [2024-07-25 17:13:32.280343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.986 [2024-07-25 17:13:32.280604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.054 ms, result 0 00:22:40.920 17:13:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:40.920 17:13:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:40.920 [2024-07-25 17:13:33.266714] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:40.920 [2024-07-25 17:13:33.266923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79969 ] 00:22:41.177 [2024-07-25 17:13:33.441157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.434 [2024-07-25 17:13:33.654050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.692 [2024-07-25 17:13:33.973292] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:41.692 [2024-07-25 17:13:33.973382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:41.692 [2024-07-25 17:13:34.135729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.692 [2024-07-25 17:13:34.135778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:41.692 [2024-07-25 17:13:34.135824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:41.692 [2024-07-25 17:13:34.135835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.692 [2024-07-25 17:13:34.139166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.692 [2024-07-25 17:13:34.139208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.692 [2024-07-25 17:13:34.139243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:22:41.692 [2024-07-25 17:13:34.139255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.692 [2024-07-25 17:13:34.139384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:41.692 [2024-07-25 17:13:34.140310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:41.692 [2024-07-25 17:13:34.140366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.692 [2024-07-25 17:13:34.140399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.692 [2024-07-25 17:13:34.140430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:22:41.692 [2024-07-25 17:13:34.140446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.692 [2024-07-25 17:13:34.142692] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:41.692 [2024-07-25 17:13:34.157202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.692 [2024-07-25 17:13:34.157243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:41.692 [2024-07-25 17:13:34.157283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.511 ms 00:22:41.692 [2024-07-25 17:13:34.157296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.692 [2024-07-25 17:13:34.157407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.692 [2024-07-25 17:13:34.157428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:41.692 [2024-07-25 17:13:34.157441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:22:41.692 [2024-07-25 17:13:34.157452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.950 [2024-07-25 17:13:34.166339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.950 [2024-07-25 17:13:34.166379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.951 [2024-07-25 17:13:34.166411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.808 ms 00:22:41.951 [2024-07-25 17:13:34.166421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.166533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.166554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.951 [2024-07-25 17:13:34.166566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:41.951 [2024-07-25 17:13:34.166576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.166616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.166660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:41.951 [2024-07-25 17:13:34.166678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:41.951 [2024-07-25 17:13:34.166689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.166725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:41.951 [2024-07-25 17:13:34.171229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.171265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.951 [2024-07-25 17:13:34.171297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.512 ms 00:22:41.951 [2024-07-25 17:13:34.171307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.171410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.171429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:41.951 [2024-07-25 17:13:34.171441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:41.951 [2024-07-25 17:13:34.171451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.171479] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:41.951 [2024-07-25 17:13:34.171509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:41.951 [2024-07-25 17:13:34.171550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:41.951 [2024-07-25 17:13:34.171570] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:41.951 [2024-07-25 17:13:34.171659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:41.951 [2024-07-25 17:13:34.171675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:41.951 [2024-07-25 17:13:34.171689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:41.951 [2024-07-25 17:13:34.171703] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:41.951 [2024-07-25 17:13:34.171714] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:41.951 [2024-07-25 17:13:34.171731] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:41.951 [2024-07-25 17:13:34.171742] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:41.951 [2024-07-25 17:13:34.171752] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:41.951 [2024-07-25 17:13:34.171762] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:41.951 [2024-07-25 17:13:34.171773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.171784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:41.951 [2024-07-25 17:13:34.171796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:41.951 [2024-07-25 17:13:34.171806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.171887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.951 [2024-07-25 17:13:34.171902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:41.951 [2024-07-25 17:13:34.171919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:41.951 [2024-07-25 17:13:34.171929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.951 [2024-07-25 17:13:34.172063] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:41.951 [2024-07-25 17:13:34.172084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:41.951 [2024-07-25 17:13:34.172095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:41.951 [2024-07-25 17:13:34.172127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:41.951 [2024-07-25 17:13:34.172157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.951 [2024-07-25 17:13:34.172176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:41.951 [2024-07-25 17:13:34.172185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:41.951 [2024-07-25 17:13:34.172194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:41.951 [2024-07-25 17:13:34.172204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:41.951 [2024-07-25 17:13:34.172214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:41.951 [2024-07-25 17:13:34.172223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172233] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:41.951 [2024-07-25 17:13:34.172245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:41.951 [2024-07-25 17:13:34.172288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:41.951 [2024-07-25 17:13:34.172317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:41.951 [2024-07-25 17:13:34.172345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:41.951 [2024-07-25 17:13:34.172374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:41.951 [2024-07-25 17:13:34.172413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:41.951 [2024-07-25 17:13:34.172422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.951 [2024-07-25 17:13:34.172448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:41.951 [2024-07-25 17:13:34.172457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:41.951 [2024-07-25 17:13:34.172466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:41.951 [2024-07-25 17:13:34.172475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:41.951 [2024-07-25 17:13:34.172484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:41.951 [2024-07-25 17:13:34.172494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:41.951 [2024-07-25 17:13:34.172512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:41.951 [2024-07-25 17:13:34.172523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.951 [2024-07-25 17:13:34.172532] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:41.951 [2024-07-25 17:13:34.172542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:41.952 [2024-07-25 17:13:34.172553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:41.952 [2024-07-25 17:13:34.172563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:41.952 [2024-07-25 17:13:34.172578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:41.952 
[2024-07-25 17:13:34.172588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:41.952 [2024-07-25 17:13:34.172600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:41.952 [2024-07-25 17:13:34.172611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:41.952 [2024-07-25 17:13:34.172620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:41.952 [2024-07-25 17:13:34.172630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:41.952 [2024-07-25 17:13:34.172641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:41.952 [2024-07-25 17:13:34.172654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:41.952 [2024-07-25 17:13:34.172678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:41.952 [2024-07-25 17:13:34.172688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:41.952 [2024-07-25 17:13:34.172699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:41.952 [2024-07-25 17:13:34.172709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:41.952 [2024-07-25 17:13:34.172720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:41.952 [2024-07-25 17:13:34.172729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:41.952 [2024-07-25 17:13:34.172740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:41.952 [2024-07-25 17:13:34.172750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:41.952 [2024-07-25 17:13:34.172760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:41.952 [2024-07-25 17:13:34.172826] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:41.952 [2024-07-25 17:13:34.172838] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:41.952 [2024-07-25 17:13:34.172861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:41.952 [2024-07-25 17:13:34.172872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:41.952 [2024-07-25 17:13:34.172882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:41.952 [2024-07-25 17:13:34.172894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.172905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:41.952 [2024-07-25 17:13:34.172915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:22:41.952 [2024-07-25 17:13:34.172926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.223209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.223482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.952 [2024-07-25 17:13:34.223662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.208 ms 00:22:41.952 [2024-07-25 17:13:34.223716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.224022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.224164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:41.952 [2024-07-25 17:13:34.224288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:41.952 [2024-07-25 17:13:34.224441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.264842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.265227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.952 [2024-07-25 17:13:34.265353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.317 ms 00:22:41.952 [2024-07-25 17:13:34.265403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.265750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.265819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.952 [2024-07-25 17:13:34.266038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:41.952 [2024-07-25 17:13:34.266093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.267008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.267176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.952 [2024-07-25 17:13:34.267287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:22:41.952 [2024-07-25 17:13:34.267335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 
17:13:34.267594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.267646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.952 [2024-07-25 17:13:34.267788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:22:41.952 [2024-07-25 17:13:34.267840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.286443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.286657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.952 [2024-07-25 17:13:34.286797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.541 ms 00:22:41.952 [2024-07-25 17:13:34.286847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.302229] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:41.952 [2024-07-25 17:13:34.302450] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:41.952 [2024-07-25 17:13:34.302599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.302722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:41.952 [2024-07-25 17:13:34.302773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.561 ms 00:22:41.952 [2024-07-25 17:13:34.302877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.327943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.328179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:41.952 [2024-07-25 17:13:34.328297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.885 ms 00:22:41.952 [2024-07-25 17:13:34.328350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.341936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.342189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:41.952 [2024-07-25 17:13:34.342369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.440 ms 00:22:41.952 [2024-07-25 17:13:34.342420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.355864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.356091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:41.952 [2024-07-25 17:13:34.356120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.176 ms 00:22:41.952 [2024-07-25 17:13:34.356132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.952 [2024-07-25 17:13:34.357128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.952 [2024-07-25 17:13:34.357162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:41.952 [2024-07-25 17:13:34.357194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:22:41.952 [2024-07-25 17:13:34.357205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.427297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.427376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:42.210 [2024-07-25 17:13:34.427413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.058 ms 00:22:42.210 [2024-07-25 17:13:34.427425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.438194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:42.210 [2024-07-25 17:13:34.463207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.463282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:42.210 [2024-07-25 17:13:34.463319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.596 ms 00:22:42.210 [2024-07-25 17:13:34.463332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.463482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.463502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:42.210 [2024-07-25 17:13:34.463514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:42.210 [2024-07-25 17:13:34.463526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.463597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.463614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:42.210 [2024-07-25 17:13:34.463626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:42.210 [2024-07-25 17:13:34.463637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.463672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.463693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:42.210 [2024-07-25 17:13:34.463705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:42.210 [2024-07-25 17:13:34.463715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.463752] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:42.210 [2024-07-25 17:13:34.463769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.463780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:42.210 [2024-07-25 17:13:34.463792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:42.210 [2024-07-25 17:13:34.463803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.492224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.492272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:42.210 [2024-07-25 17:13:34.492304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.390 ms 00:22:42.210 [2024-07-25 17:13:34.492316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.492440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.210 [2024-07-25 17:13:34.492460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:42.210 [2024-07-25 17:13:34.492472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:42.210 [2024-07-25 17:13:34.492483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.210 [2024-07-25 17:13:34.493969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:42.210 [2024-07-25 17:13:34.497590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.795 ms, result 0 00:22:42.211 [2024-07-25 17:13:34.498551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.211 [2024-07-25 17:13:34.512972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.699  Copying: 24/256 [MB] (24 MBps) Copying: 46/256 [MB] (21 MBps) Copying: 67/256 [MB] (21 MBps) Copying: 89/256 [MB] (22 MBps) Copying: 111/256 [MB] (22 MBps) Copying: 133/256 [MB] (21 MBps) Copying: 154/256 [MB] (21 MBps) Copying: 176/256 [MB] (21 MBps) Copying: 198/256 [MB] (22 MBps) Copying: 221/256 [MB] (22 MBps) Copying: 242/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-25 17:13:46.112424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.699 [2024-07-25 17:13:46.125302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.699 [2024-07-25 17:13:46.125377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.699 [2024-07-25 17:13:46.125414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.699 [2024-07-25 17:13:46.125426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.699 [2024-07-25 17:13:46.125464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:53.699 [2024-07-25 17:13:46.129115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.699 [2024-07-25 17:13:46.129148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.699 [2024-07-25 17:13:46.129181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:22:53.699 [2024-07-25 17:13:46.129193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.699 [2024-07-25 17:13:46.129513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.699 [2024-07-25 17:13:46.129531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.699 [2024-07-25 17:13:46.129543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:53.699 [2024-07-25 17:13:46.129554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.699 [2024-07-25 17:13:46.133112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.699 [2024-07-25 17:13:46.133144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.699 [2024-07-25 17:13:46.133182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:22:53.699 [2024-07-25 17:13:46.133194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.699 [2024-07-25 17:13:46.140458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.699 [2024-07-25 17:13:46.140487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finish L2P trims 00:22:53.699 [2024-07-25 17:13:46.140517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.240 ms 00:22:53.699 [2024-07-25 17:13:46.140527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.168120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.168161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.958 [2024-07-25 17:13:46.168194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.529 ms 00:22:53.958 [2024-07-25 17:13:46.168205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.184612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.184653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.958 [2024-07-25 17:13:46.184685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.347 ms 00:22:53.958 [2024-07-25 17:13:46.184705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.184862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.184882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:53.958 [2024-07-25 17:13:46.184895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:22:53.958 [2024-07-25 17:13:46.184906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.212292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.212331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:53.958 [2024-07-25 17:13:46.212364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.362 ms 00:22:53.958 [2024-07-25 17:13:46.212375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.239208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.239247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:53.958 [2024-07-25 17:13:46.239279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.759 ms 00:22:53.958 [2024-07-25 17:13:46.239290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.958 [2024-07-25 17:13:46.265809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.958 [2024-07-25 17:13:46.265861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.959 [2024-07-25 17:13:46.265894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.456 ms 00:22:53.959 [2024-07-25 17:13:46.265905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.959 [2024-07-25 17:13:46.292574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.959 [2024-07-25 17:13:46.292612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.959 [2024-07-25 17:13:46.292644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.543 ms 00:22:53.959 [2024-07-25 17:13:46.292654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.959 [2024-07-25 17:13:46.292743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.959 
[2024-07-25 17:13:46.292770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.292974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:22:53.959 [2024-07-25 17:13:46.293120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.959 [2024-07-25 17:13:46.293799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.293996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.294009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.294020] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.960 [2024-07-25 17:13:46.294048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.960 [2024-07-25 17:13:46.294062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:53.960 [2024-07-25 17:13:46.294074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:53.960 [2024-07-25 17:13:46.294086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:53.960 [2024-07-25 17:13:46.294108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:53.960 [2024-07-25 17:13:46.294120] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:53.960 [2024-07-25 17:13:46.294130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.960 [2024-07-25 17:13:46.294141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.960 [2024-07-25 17:13:46.294152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.960 [2024-07-25 17:13:46.294164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.960 [2024-07-25 17:13:46.294173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.960 [2024-07-25 17:13:46.294184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.960 [2024-07-25 17:13:46.294195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.960 [2024-07-25 17:13:46.294212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:22:53.960 [2024-07-25 17:13:46.294223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.310099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.960 [2024-07-25 17:13:46.310155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.960 [2024-07-25 17:13:46.310190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.847 ms 00:22:53.960 [2024-07-25 17:13:46.310201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.310741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.960 [2024-07-25 17:13:46.310773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.960 [2024-07-25 17:13:46.310786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:22:53.960 [2024-07-25 17:13:46.310798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.348388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.960 [2024-07-25 17:13:46.348440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.960 [2024-07-25 17:13:46.348474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.960 [2024-07-25 17:13:46.348485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.348598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.960 [2024-07-25 17:13:46.348617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.960 [2024-07-25 17:13:46.348629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.960 [2024-07-25 17:13:46.348640] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.348700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.960 [2024-07-25 17:13:46.348717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.960 [2024-07-25 17:13:46.348729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.960 [2024-07-25 17:13:46.348739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.960 [2024-07-25 17:13:46.348763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.960 [2024-07-25 17:13:46.348777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.960 [2024-07-25 17:13:46.348794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.960 [2024-07-25 17:13:46.348805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.442261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.442336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.219 [2024-07-25 17:13:46.442372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.442383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.219 [2024-07-25 17:13:46.515452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.515463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.219 [2024-07-25 17:13:46.515581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.515592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.219 [2024-07-25 17:13:46.515659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.515675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.219 [2024-07-25 17:13:46.515818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.515828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.219 [2024-07-25 17:13:46.515904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.515914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.515967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.515982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.219 [2024-07-25 17:13:46.516038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.516052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.516109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.219 [2024-07-25 17:13:46.516125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.219 [2024-07-25 17:13:46.516136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.219 [2024-07-25 17:13:46.516153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.219 [2024-07-25 17:13:46.516318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.005 ms, result 0 00:22:55.154 00:22:55.154 00:22:55.154 17:13:47 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:55.154 17:13:47 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:55.734 17:13:48 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:55.734 [2024-07-25 17:13:48.082888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:55.734 [2024-07-25 17:13:48.083114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80128 ] 00:22:56.022 [2024-07-25 17:13:48.239850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.022 [2024-07-25 17:13:48.438841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.590 [2024-07-25 17:13:48.762741] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.590 [2024-07-25 17:13:48.762831] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.590 [2024-07-25 17:13:48.924545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.590 [2024-07-25 17:13:48.924593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:56.590 [2024-07-25 17:13:48.924630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:56.590 [2024-07-25 17:13:48.924642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.590 [2024-07-25 17:13:48.928072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.590 [2024-07-25 17:13:48.928115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.590 [2024-07-25 17:13:48.928149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:22:56.590 [2024-07-25 17:13:48.928161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.590 [2024-07-25 17:13:48.928309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:56.590 [2024-07-25 17:13:48.929324] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:56.590 [2024-07-25 17:13:48.929395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.590 [2024-07-25 17:13:48.929410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.590 [2024-07-25 17:13:48.929438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:22:56.590 [2024-07-25 17:13:48.929449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.931588] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:56.591 [2024-07-25 17:13:48.947137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.947177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:56.591 [2024-07-25 17:13:48.947216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.550 ms 00:22:56.591 [2024-07-25 17:13:48.947228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.947337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.947358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:56.591 [2024-07-25 17:13:48.947385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:56.591 [2024-07-25 17:13:48.947396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.956208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:56.591 [2024-07-25 17:13:48.956249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.591 [2024-07-25 17:13:48.956281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:22:56.591 [2024-07-25 17:13:48.956291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.956410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.956430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.591 [2024-07-25 17:13:48.956451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:56.591 [2024-07-25 17:13:48.956462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.956512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.956529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.591 [2024-07-25 17:13:48.956545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:56.591 [2024-07-25 17:13:48.956555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.956586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:56.591 [2024-07-25 17:13:48.961321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.961516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.591 [2024-07-25 17:13:48.961631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:22:56.591 [2024-07-25 17:13:48.961736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.961863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.961927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.591 [2024-07-25 17:13:48.962070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:56.591 [2024-07-25 17:13:48.962206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.962290] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:56.591 [2024-07-25 17:13:48.962436] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:56.591 [2024-07-25 17:13:48.962611] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:56.591 [2024-07-25 17:13:48.962665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:56.591 [2024-07-25 17:13:48.962765] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.591 [2024-07-25 17:13:48.962781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.591 [2024-07-25 17:13:48.962796] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:56.591 [2024-07-25 17:13:48.962811] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.591 [2024-07-25 17:13:48.962824] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.591 [2024-07-25 17:13:48.962849] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:56.591 [2024-07-25 17:13:48.962860] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.591 [2024-07-25 17:13:48.962871] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.591 [2024-07-25 17:13:48.962881] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.591 [2024-07-25 17:13:48.962893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.962905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.591 [2024-07-25 17:13:48.962917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:22:56.591 [2024-07-25 17:13:48.962928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.963088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.591 [2024-07-25 17:13:48.963106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.591 [2024-07-25 17:13:48.963123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:56.591 [2024-07-25 17:13:48.963134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.591 [2024-07-25 17:13:48.963239] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.591 [2024-07-25 17:13:48.963256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.591 [2024-07-25 17:13:48.963269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.591 [2024-07-25 17:13:48.963301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.591 [2024-07-25 17:13:48.963332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.591 [2024-07-25 17:13:48.963352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.591 [2024-07-25 17:13:48.963362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:56.591 [2024-07-25 17:13:48.963387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.591 [2024-07-25 17:13:48.963397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.591 [2024-07-25 17:13:48.963406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:56.591 [2024-07-25 17:13:48.963415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:56.591 [2024-07-25 17:13:48.963437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963460] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.591 [2024-07-25 17:13:48.963480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.591 [2024-07-25 17:13:48.963510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.591 [2024-07-25 17:13:48.963542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.591 [2024-07-25 17:13:48.963573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.591 [2024-07-25 17:13:48.963602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.591 [2024-07-25 17:13:48.963621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.591 [2024-07-25 17:13:48.963631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:56.591 [2024-07-25 17:13:48.963641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.591 [2024-07-25 17:13:48.963651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.591 [2024-07-25 17:13:48.963661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:56.591 [2024-07-25 17:13:48.963670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.591 [2024-07-25 17:13:48.963690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:56.591 [2024-07-25 17:13:48.963700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963709] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.591 [2024-07-25 17:13:48.963720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.591 [2024-07-25 17:13:48.963730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.591 [2024-07-25 17:13:48.963740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.591 [2024-07-25 17:13:48.963757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:56.591 [2024-07-25 17:13:48.963768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.591 [2024-07-25 17:13:48.963778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.591 
[2024-07-25 17:13:48.963788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.591 [2024-07-25 17:13:48.963797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.592 [2024-07-25 17:13:48.963808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.592 [2024-07-25 17:13:48.963821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.592 [2024-07-25 17:13:48.963834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.963847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:56.592 [2024-07-25 17:13:48.963858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:56.592 [2024-07-25 17:13:48.963870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:56.592 [2024-07-25 17:13:48.963881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:56.592 [2024-07-25 17:13:48.963892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:56.592 [2024-07-25 17:13:48.963903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:56.592 [2024-07-25 17:13:48.963914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:56.592 [2024-07-25 17:13:48.963925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:56.592 [2024-07-25 17:13:48.963935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:56.592 [2024-07-25 17:13:48.963946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.963957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.963967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.963978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.963989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:56.592 [2024-07-25 17:13:48.963999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.592 [2024-07-25 17:13:48.964026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.964040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.592 [2024-07-25 17:13:48.964052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.592 [2024-07-25 17:13:48.964062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.592 [2024-07-25 17:13:48.964073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.592 [2024-07-25 17:13:48.964085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:48.964096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.592 [2024-07-25 17:13:48.964108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:22:56.592 [2024-07-25 17:13:48.964118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.008475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.008754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.592 [2024-07-25 17:13:49.008876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.284 ms 00:22:56.592 [2024-07-25 17:13:49.009005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.009237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.009381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:56.592 [2024-07-25 17:13:49.009497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:56.592 [2024-07-25 17:13:49.009543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.049684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.049876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.592 [2024-07-25 17:13:49.050038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.027 ms 00:22:56.592 [2024-07-25 17:13:49.050145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.050412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.050529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.592 [2024-07-25 17:13:49.050675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.592 [2024-07-25 17:13:49.050800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.051524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.051646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.592 [2024-07-25 17:13:49.051765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:22:56.592 [2024-07-25 17:13:49.051811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.592 [2024-07-25 17:13:49.052096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.592 [2024-07-25 17:13:49.052220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.592 [2024-07-25 17:13:49.052320] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:22:56.592 [2024-07-25 17:13:49.052367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.070794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.070954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.851 [2024-07-25 17:13:49.071162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:22:56.851 [2024-07-25 17:13:49.071219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.086713] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:56.851 [2024-07-25 17:13:49.086885] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:56.851 [2024-07-25 17:13:49.086925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.086938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:56.851 [2024-07-25 17:13:49.086966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.440 ms 00:22:56.851 [2024-07-25 17:13:49.086979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.113104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.113153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:56.851 [2024-07-25 17:13:49.113188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.997 ms 00:22:56.851 [2024-07-25 17:13:49.113199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.128013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.128053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:56.851 [2024-07-25 17:13:49.128085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.678 ms 00:22:56.851 [2024-07-25 17:13:49.128096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.141658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.141708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:56.851 [2024-07-25 17:13:49.141740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.472 ms 00:22:56.851 [2024-07-25 17:13:49.141750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.142914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.142951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:56.851 [2024-07-25 17:13:49.142998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:22:56.851 [2024-07-25 17:13:49.143028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.213912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.214015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:56.851 [2024-07-25 17:13:49.214052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 70.830 ms 00:22:56.851 [2024-07-25 17:13:49.214064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.225321] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:56.851 [2024-07-25 17:13:49.247784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.247844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:56.851 [2024-07-25 17:13:49.247878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.560 ms 00:22:56.851 [2024-07-25 17:13:49.247890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.248088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.248108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:56.851 [2024-07-25 17:13:49.248123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:56.851 [2024-07-25 17:13:49.248134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.248205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.851 [2024-07-25 17:13:49.248221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:56.851 [2024-07-25 17:13:49.248234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:56.851 [2024-07-25 17:13:49.248244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.851 [2024-07-25 17:13:49.248278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.852 [2024-07-25 17:13:49.248298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:56.852 [2024-07-25 17:13:49.248310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:56.852 [2024-07-25 17:13:49.248321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.852 [2024-07-25 17:13:49.248360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:56.852 [2024-07-25 17:13:49.248375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.852 [2024-07-25 17:13:49.248386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:56.852 [2024-07-25 17:13:49.248412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:56.852 [2024-07-25 17:13:49.248423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.852 [2024-07-25 17:13:49.276148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.852 [2024-07-25 17:13:49.276195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:56.852 [2024-07-25 17:13:49.276227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.679 ms 00:22:56.852 [2024-07-25 17:13:49.276238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.852 [2024-07-25 17:13:49.276356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.852 [2024-07-25 17:13:49.276374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:56.852 [2024-07-25 17:13:49.276387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:56.852 [2024-07-25 17:13:49.276397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:56.852 [2024-07-25 17:13:49.277817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.852 [2024-07-25 17:13:49.281660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.865 ms, result 0 00:22:56.852 [2024-07-25 17:13:49.282738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:56.852 [2024-07-25 17:13:49.297222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:57.110  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-25 17:13:49.475955] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:57.110 [2024-07-25 17:13:49.486333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.486372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:57.110 [2024-07-25 17:13:49.486404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:57.110 [2024-07-25 17:13:49.486414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.486448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:57.110 [2024-07-25 17:13:49.489809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.489840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:57.110 [2024-07-25 17:13:49.489869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.343 ms 00:22:57.110 [2024-07-25 17:13:49.489880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.491670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.491708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:57.110 [2024-07-25 17:13:49.491738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:22:57.110 [2024-07-25 17:13:49.491748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.495365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.495403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:57.110 [2024-07-25 17:13:49.495442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.595 ms 00:22:57.110 [2024-07-25 17:13:49.495453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.501746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.501778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:57.110 [2024-07-25 17:13:49.501807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.254 ms 00:22:57.110 [2024-07-25 17:13:49.501817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.527685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.527724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:57.110 [2024-07-25 17:13:49.527755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.808 ms 00:22:57.110 [2024-07-25 17:13:49.527764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.543775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.543814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:57.110 [2024-07-25 17:13:49.543846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.955 ms 00:22:57.110 [2024-07-25 17:13:49.543862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.544041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.544069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:57.110 [2024-07-25 17:13:49.544081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:22:57.110 [2024-07-25 17:13:49.544091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.110 [2024-07-25 17:13:49.570371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.110 [2024-07-25 17:13:49.570408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:57.110 [2024-07-25 17:13:49.570438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.242 ms 00:22:57.110 [2024-07-25 17:13:49.570448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.369 [2024-07-25 17:13:49.597192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.369 [2024-07-25 17:13:49.597231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:57.369 [2024-07-25 17:13:49.597262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.667 ms 00:22:57.369 [2024-07-25 17:13:49.597272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.369 [2024-07-25 17:13:49.623440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.369 [2024-07-25 17:13:49.623478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:57.369 [2024-07-25 17:13:49.623509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.111 ms 00:22:57.369 [2024-07-25 17:13:49.623527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.370 [2024-07-25 17:13:49.649695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.370 [2024-07-25 17:13:49.649734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:57.370 [2024-07-25 17:13:49.649765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.085 ms 00:22:57.370 [2024-07-25 17:13:49.649774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.370 [2024-07-25 17:13:49.649831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:57.370 [2024-07-25 17:13:49.649853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 
17:13:49.649901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.649974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:22:57.370 [2024-07-25 17:13:49.650209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:57.370 [2024-07-25 17:13:49.650834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.650993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.651006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.651017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.651028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:57.371 [2024-07-25 17:13:49.651048] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:57.371 [2024-07-25 17:13:49.651058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:22:57.371 [2024-07-25 17:13:49.651070] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:57.371 [2024-07-25 17:13:49.651080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:57.371 
[2024-07-25 17:13:49.651108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:57.371 [2024-07-25 17:13:49.651119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:57.371 [2024-07-25 17:13:49.651129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:57.371 [2024-07-25 17:13:49.651139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:57.371 [2024-07-25 17:13:49.651149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:57.371 [2024-07-25 17:13:49.651158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:57.371 [2024-07-25 17:13:49.651168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:57.371 [2024-07-25 17:13:49.651178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.371 [2024-07-25 17:13:49.651189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:57.371 [2024-07-25 17:13:49.651205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:22:57.371 [2024-07-25 17:13:49.651216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.666858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.371 [2024-07-25 17:13:49.666894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:57.371 [2024-07-25 17:13:49.666925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.618 ms 00:22:57.371 [2024-07-25 17:13:49.666935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.667524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.371 [2024-07-25 17:13:49.667558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:57.371 [2024-07-25 17:13:49.667577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:22:57.371 [2024-07-25 17:13:49.667588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.705980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.371 [2024-07-25 17:13:49.706055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.371 [2024-07-25 17:13:49.706090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.371 [2024-07-25 17:13:49.706101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.706238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.371 [2024-07-25 17:13:49.706256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.371 [2024-07-25 17:13:49.706269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.371 [2024-07-25 17:13:49.706280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.706354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.371 [2024-07-25 17:13:49.706372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.371 [2024-07-25 17:13:49.706385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.371 [2024-07-25 17:13:49.706395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.706420] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:57.371 [2024-07-25 17:13:49.706455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.371 [2024-07-25 17:13:49.706467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.371 [2024-07-25 17:13:49.706478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.371 [2024-07-25 17:13:49.801312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.371 [2024-07-25 17:13:49.801391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.371 [2024-07-25 17:13:49.801427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.371 [2024-07-25 17:13:49.801438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.629 [2024-07-25 17:13:49.879300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.629 [2024-07-25 17:13:49.879449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.629 [2024-07-25 17:13:49.879519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.629 [2024-07-25 17:13:49.879684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:57.629 [2024-07-25 17:13:49.879787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.629 [2024-07-25 17:13:49.879851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.629 [2024-07-25 17:13:49.879877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.879888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:57.629 [2024-07-25 17:13:49.879943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.629 [2024-07-25 17:13:49.879958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.629 [2024-07-25 17:13:49.879970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.629 [2024-07-25 17:13:49.880051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.630 [2024-07-25 17:13:49.880259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.897 ms, result 0 00:22:58.574 00:22:58.574 00:22:58.574 17:13:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80153 00:22:58.574 17:13:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:58.574 17:13:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80153 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80153 ']' 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.574 17:13:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:58.574 [2024-07-25 17:13:50.990350] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:58.574 [2024-07-25 17:13:50.990525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80153 ] 00:22:58.837 [2024-07-25 17:13:51.157347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.095 [2024-07-25 17:13:51.352441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.659 17:13:52 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.659 17:13:52 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:59.659 17:13:52 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:59.917 [2024-07-25 17:13:52.325871] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:59.917 [2024-07-25 17:13:52.325958] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.175 [2024-07-25 17:13:52.502058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.502124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:00.175 [2024-07-25 17:13:52.502143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:00.175 [2024-07-25 17:13:52.502156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.505193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.505250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.175 [2024-07-25 17:13:52.505266] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:23:00.175 [2024-07-25 17:13:52.505278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.505417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:00.175 [2024-07-25 17:13:52.506374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:00.175 [2024-07-25 17:13:52.506441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.506457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.175 [2024-07-25 17:13:52.506469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:23:00.175 [2024-07-25 17:13:52.506485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.508630] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:00.175 [2024-07-25 17:13:52.522851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.522892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:00.175 [2024-07-25 17:13:52.522927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.217 ms 00:23:00.175 [2024-07-25 17:13:52.522938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.523079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.523101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:00.175 [2024-07-25 17:13:52.523116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:00.175 [2024-07-25 17:13:52.523126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.531895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.531936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.175 [2024-07-25 17:13:52.531973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.705 ms 00:23:00.175 [2024-07-25 17:13:52.531985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.532195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.532215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.175 [2024-07-25 17:13:52.532230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:00.175 [2024-07-25 17:13:52.532246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.532287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.532302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.175 [2024-07-25 17:13:52.532315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:00.175 [2024-07-25 17:13:52.532326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.532362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:00.175 [2024-07-25 17:13:52.537049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:00.175 [2024-07-25 17:13:52.537088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.175 [2024-07-25 17:13:52.537119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:23:00.175 [2024-07-25 17:13:52.537131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.537208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.537232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.175 [2024-07-25 17:13:52.537247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:00.175 [2024-07-25 17:13:52.537259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.537297] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:00.175 [2024-07-25 17:13:52.537324] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:00.175 [2024-07-25 17:13:52.537369] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:00.175 [2024-07-25 17:13:52.537394] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:00.175 [2024-07-25 17:13:52.537486] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:00.175 [2024-07-25 17:13:52.537510] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.175 [2024-07-25 17:13:52.537524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:00.175 [2024-07-25 17:13:52.537540] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.175 [2024-07-25 17:13:52.537552] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.175 [2024-07-25 17:13:52.537565] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:00.175 [2024-07-25 17:13:52.537576] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.175 [2024-07-25 17:13:52.537588] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:00.175 [2024-07-25 17:13:52.537598] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:00.175 [2024-07-25 17:13:52.537614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.537624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.175 [2024-07-25 17:13:52.537636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:00.175 [2024-07-25 17:13:52.537649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.537738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.175 [2024-07-25 17:13:52.537751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.175 [2024-07-25 17:13:52.537764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:00.175 [2024-07-25 17:13:52.537774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.175 [2024-07-25 17:13:52.537880] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.175 [2024-07-25 17:13:52.537898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.175 [2024-07-25 17:13:52.537912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.175 [2024-07-25 17:13:52.537923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.175 [2024-07-25 17:13:52.537941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:00.175 [2024-07-25 17:13:52.537951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.537962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:00.176 [2024-07-25 17:13:52.537973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.176 [2024-07-25 17:13:52.537988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.176 [2024-07-25 17:13:52.538050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.176 [2024-07-25 17:13:52.538061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:00.176 [2024-07-25 17:13:52.538072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.176 [2024-07-25 17:13:52.538084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.176 [2024-07-25 17:13:52.538097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:00.176 [2024-07-25 17:13:52.538107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.176 [2024-07-25 17:13:52.538146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.176 [2024-07-25 17:13:52.538181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.176 [2024-07-25 17:13:52.538215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.176 [2024-07-25 17:13:52.538251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.176 [2024-07-25 17:13:52.538297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.176 [2024-07-25 
17:13:52.538348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.176 [2024-07-25 17:13:52.538372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:00.176 [2024-07-25 17:13:52.538383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:00.176 [2024-07-25 17:13:52.538395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.176 [2024-07-25 17:13:52.538406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:00.176 [2024-07-25 17:13:52.538419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:00.176 [2024-07-25 17:13:52.538444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:00.176 [2024-07-25 17:13:52.538471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:00.176 [2024-07-25 17:13:52.538483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538493] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.176 [2024-07-25 17:13:52.538507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.176 [2024-07-25 17:13:52.538519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.176 [2024-07-25 17:13:52.538554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.176 [2024-07-25 17:13:52.538567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.176 [2024-07-25 17:13:52.538578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.176 [2024-07-25 17:13:52.538590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.176 [2024-07-25 17:13:52.538601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.176 [2024-07-25 17:13:52.538613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.176 [2024-07-25 17:13:52.538626] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.176 [2024-07-25 17:13:52.538672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:00.176 [2024-07-25 17:13:52.538713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:00.176 [2024-07-25 17:13:52.538725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:00.176 [2024-07-25 17:13:52.538738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:00.176 [2024-07-25 17:13:52.538750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:00.176 
[2024-07-25 17:13:52.538764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:00.176 [2024-07-25 17:13:52.538775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:00.176 [2024-07-25 17:13:52.538789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:00.176 [2024-07-25 17:13:52.538800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:00.176 [2024-07-25 17:13:52.538813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:00.176 [2024-07-25 17:13:52.538875] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.176 [2024-07-25 17:13:52.538890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.176 [2024-07-25 17:13:52.538920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:00.176 [2024-07-25 17:13:52.538931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.176 [2024-07-25 17:13:52.538945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:00.176 [2024-07-25 17:13:52.538958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.538973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.176 [2024-07-25 17:13:52.538986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:23:00.176 [2024-07-25 17:13:52.539020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.575311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.575384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.176 [2024-07-25 17:13:52.575406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.207 ms 00:23:00.176 [2024-07-25 17:13:52.575420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.575582] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.575603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.176 [2024-07-25 17:13:52.575616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:00.176 [2024-07-25 17:13:52.575629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.611927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.612020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.176 [2024-07-25 17:13:52.612053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.269 ms 00:23:00.176 [2024-07-25 17:13:52.612067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.612172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.612194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.176 [2024-07-25 17:13:52.612207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:00.176 [2024-07-25 17:13:52.612220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.612871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.612911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.176 [2024-07-25 17:13:52.612925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:23:00.176 [2024-07-25 17:13:52.612938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.613122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.613142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.176 [2024-07-25 17:13:52.613154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:23:00.176 [2024-07-25 17:13:52.613167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.176 [2024-07-25 17:13:52.632133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.176 [2024-07-25 17:13:52.632197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.177 [2024-07-25 17:13:52.632213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.939 ms 00:23:00.177 [2024-07-25 17:13:52.632227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.646914] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:00.435 [2024-07-25 17:13:52.646975] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:00.435 [2024-07-25 17:13:52.647041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.647057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:00.435 [2024-07-25 17:13:52.647070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.669 ms 00:23:00.435 [2024-07-25 17:13:52.647083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.671581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 
17:13:52.671638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:00.435 [2024-07-25 17:13:52.671654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.389 ms 00:23:00.435 [2024-07-25 17:13:52.671670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.684530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.684587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:00.435 [2024-07-25 17:13:52.684611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:23:00.435 [2024-07-25 17:13:52.684627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.697549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.697601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:00.435 [2024-07-25 17:13:52.697616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.845 ms 00:23:00.435 [2024-07-25 17:13:52.697628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.698459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.698533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.435 [2024-07-25 17:13:52.698547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:23:00.435 [2024-07-25 17:13:52.698563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.773662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.773753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.435 [2024-07-25 17:13:52.773774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.067 ms 00:23:00.435 [2024-07-25 17:13:52.773788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.784077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:00.435 [2024-07-25 17:13:52.802433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.802488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.435 [2024-07-25 17:13:52.802526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.520 ms 00:23:00.435 [2024-07-25 17:13:52.802537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.802675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.802694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.435 [2024-07-25 17:13:52.802709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:00.435 [2024-07-25 17:13:52.802721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.802793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.802807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.435 [2024-07-25 17:13:52.802824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:00.435 
[2024-07-25 17:13:52.802835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.802870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.802883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.435 [2024-07-25 17:13:52.802896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:00.435 [2024-07-25 17:13:52.802907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.802977] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.435 [2024-07-25 17:13:52.802992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.803050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.435 [2024-07-25 17:13:52.803082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:00.435 [2024-07-25 17:13:52.803099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.829343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.829403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.435 [2024-07-25 17:13:52.829420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.213 ms 00:23:00.435 [2024-07-25 17:13:52.829433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.829549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.435 [2024-07-25 17:13:52.829574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:00.435 [2024-07-25 17:13:52.829589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:00.435 [2024-07-25 17:13:52.829601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.435 [2024-07-25 17:13:52.831021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.435 [2024-07-25 17:13:52.834476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.434 ms, result 0 00:23:00.435 [2024-07-25 17:13:52.835624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:00.435 Some configs were skipped because the RPC state that can call them passed over. 
00:23:00.435 17:13:52 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:00.693 [2024-07-25 17:13:53.110957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.693 [2024-07-25 17:13:53.111252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:00.693 [2024-07-25 17:13:53.111387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:23:00.693 [2024-07-25 17:13:53.111440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.693 [2024-07-25 17:13:53.111568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.096 ms, result 0 00:23:00.693 true 00:23:00.693 17:13:53 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:00.977 [2024-07-25 17:13:53.355143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.978 [2024-07-25 17:13:53.355368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:00.978 [2024-07-25 17:13:53.355507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:23:00.978 [2024-07-25 17:13:53.355561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.978 [2024-07-25 17:13:53.355733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.933 ms, result 0 00:23:00.978 true 00:23:00.978 17:13:53 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80153 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80153 ']' 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80153 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80153 00:23:00.978 killing process with pid 80153 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80153' 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80153 00:23:00.978 17:13:53 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80153 00:23:01.936 [2024-07-25 17:13:54.294435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.294504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:01.936 [2024-07-25 17:13:54.294544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:01.936 [2024-07-25 17:13:54.294558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.294590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:01.936 [2024-07-25 17:13:54.297897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.297948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:01.936 [2024-07-25 17:13:54.297970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.287 ms 00:23:01.936 [2024-07-25 17:13:54.298000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.298337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.298365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:01.936 [2024-07-25 17:13:54.298394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:23:01.936 [2024-07-25 17:13:54.298408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.302037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.302102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:01.936 [2024-07-25 17:13:54.302136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:23:01.936 [2024-07-25 17:13:54.302164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.308638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.308697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:01.936 [2024-07-25 17:13:54.308713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.413 ms 00:23:01.936 [2024-07-25 17:13:54.308725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.319607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.319664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:01.936 [2024-07-25 17:13:54.319680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.831 ms 00:23:01.936 [2024-07-25 17:13:54.319694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.328206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.328256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:01.936 [2024-07-25 17:13:54.328287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.472 ms 00:23:01.936 [2024-07-25 17:13:54.328299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.328442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.328463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:01.936 [2024-07-25 17:13:54.328475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:01.936 [2024-07-25 17:13:54.328499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.341586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.341624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:01.936 [2024-07-25 17:13:54.341638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.063 ms 00:23:01.936 [2024-07-25 17:13:54.341651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.353264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.936 [2024-07-25 17:13:54.353305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:01.936 [2024-07-25 
17:13:54.353322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.574 ms 00:23:01.936 [2024-07-25 17:13:54.353341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.936 [2024-07-25 17:13:54.364528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.937 [2024-07-25 17:13:54.364582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:01.937 [2024-07-25 17:13:54.364595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.143 ms 00:23:01.937 [2024-07-25 17:13:54.364607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.937 [2024-07-25 17:13:54.375069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.937 [2024-07-25 17:13:54.375139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:01.937 [2024-07-25 17:13:54.375162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.389 ms 00:23:01.937 [2024-07-25 17:13:54.375175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.937 [2024-07-25 17:13:54.375213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:01.937 [2024-07-25 17:13:54.375237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 
17:13:54.375741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.375975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:23:01.937 [2024-07-25 17:13:54.376093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:01.937 [2024-07-25 17:13:54.376156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:01.938 [2024-07-25 17:13:54.376541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:01.938 [2024-07-25 17:13:54.376552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:23:01.938 [2024-07-25 17:13:54.376567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:01.938 [2024-07-25 17:13:54.376576] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:01.938 [2024-07-25 17:13:54.376588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:01.938 [2024-07-25 17:13:54.376598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:01.938 [2024-07-25 17:13:54.376609] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:01.938 [2024-07-25 17:13:54.376619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:01.938 [2024-07-25 17:13:54.376631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:01.938 [2024-07-25 17:13:54.376640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:01.938 [2024-07-25 17:13:54.376663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:01.938 [2024-07-25 17:13:54.376673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.938 [2024-07-25 17:13:54.376685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:01.938 [2024-07-25 17:13:54.376697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:23:01.938 [2024-07-25 17:13:54.376713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.938 [2024-07-25 17:13:54.391052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.938 [2024-07-25 17:13:54.391103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:01.938 [2024-07-25 17:13:54.391118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.299 ms 00:23:01.938 [2024-07-25 17:13:54.391134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.938 [2024-07-25 17:13:54.391601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:01.938 [2024-07-25 17:13:54.391660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:01.938 [2024-07-25 17:13:54.391679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:23:01.938 [2024-07-25 17:13:54.391692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.437972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.438038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.197 [2024-07-25 17:13:54.438053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.438066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.438159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.438178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.197 [2024-07-25 17:13:54.438193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.438205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.438268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.438288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.197 [2024-07-25 17:13:54.438299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.438314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.438336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.438350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.197 [2024-07-25 17:13:54.438361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.438376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.519261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.519331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.197 [2024-07-25 17:13:54.519347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.519360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.197 [2024-07-25 17:13:54.595271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.197 [2024-07-25 17:13:54.595421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:02.197 [2024-07-25 17:13:54.595472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.197 [2024-07-25 17:13:54.595498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.197 [2024-07-25 17:13:54.595658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.197 [2024-07-25 17:13:54.595747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.197 [2024-07-25 17:13:54.595837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.595905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.197 [2024-07-25 17:13:54.595924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.197 [2024-07-25 17:13:54.595936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.197 [2024-07-25 17:13:54.595959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.197 [2024-07-25 17:13:54.596175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 301.721 ms, result 0 00:23:03.129 17:13:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:03.129 [2024-07-25 17:13:55.514689] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:03.129 [2024-07-25 17:13:55.514871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80217 ] 00:23:03.386 [2024-07-25 17:13:55.683679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.644 [2024-07-25 17:13:55.885957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.902 [2024-07-25 17:13:56.195463] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:03.902 [2024-07-25 17:13:56.195526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:03.902 [2024-07-25 17:13:56.355704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.902 [2024-07-25 17:13:56.355759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:03.902 [2024-07-25 17:13:56.355778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:03.902 [2024-07-25 17:13:56.355788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.902 [2024-07-25 17:13:56.358915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.902 [2024-07-25 17:13:56.358981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:03.902 [2024-07-25 17:13:56.359012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.101 ms 00:23:03.902 [2024-07-25 17:13:56.359041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.902 [2024-07-25 17:13:56.359245] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:03.902 [2024-07-25 17:13:56.360238] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:03.902 [2024-07-25 17:13:56.360288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.902 [2024-07-25 17:13:56.360310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:03.902 [2024-07-25 17:13:56.360322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:23:03.902 [2024-07-25 17:13:56.360332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.902 [2024-07-25 17:13:56.362422] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:04.161 [2024-07-25 17:13:56.377319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.161 [2024-07-25 17:13:56.377369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:04.161 [2024-07-25 17:13:56.377389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.899 ms 00:23:04.161 [2024-07-25 17:13:56.377400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.161 [2024-07-25 17:13:56.377504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.161 [2024-07-25 17:13:56.377523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:04.161 [2024-07-25 17:13:56.377535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:04.161 [2024-07-25 17:13:56.377544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.161 [2024-07-25 17:13:56.386360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:04.161 [2024-07-25 17:13:56.386406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.161 [2024-07-25 17:13:56.386419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.733 ms 00:23:04.161 [2024-07-25 17:13:56.386430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.161 [2024-07-25 17:13:56.386544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.161 [2024-07-25 17:13:56.386563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.161 [2024-07-25 17:13:56.386574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:04.162 [2024-07-25 17:13:56.386584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.386624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.162 [2024-07-25 17:13:56.386666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.162 [2024-07-25 17:13:56.386688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:04.162 [2024-07-25 17:13:56.386698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.386732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:04.162 [2024-07-25 17:13:56.391249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.162 [2024-07-25 17:13:56.391278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.162 [2024-07-25 17:13:56.391292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.526 ms 00:23:04.162 [2024-07-25 17:13:56.391302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.391383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.162 [2024-07-25 17:13:56.391401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.162 [2024-07-25 17:13:56.391412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:04.162 [2024-07-25 17:13:56.391422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.391450] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:04.162 [2024-07-25 17:13:56.391481] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:04.162 [2024-07-25 17:13:56.391522] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:04.162 [2024-07-25 17:13:56.391541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:04.162 [2024-07-25 17:13:56.391632] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.162 [2024-07-25 17:13:56.391647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.162 [2024-07-25 17:13:56.391660] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:04.162 [2024-07-25 17:13:56.391673] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.162 [2024-07-25 17:13:56.391685] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.162 [2024-07-25 17:13:56.391701] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:04.162 [2024-07-25 17:13:56.391711] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.162 [2024-07-25 17:13:56.391721] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.162 [2024-07-25 17:13:56.391731] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.162 [2024-07-25 17:13:56.391741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.162 [2024-07-25 17:13:56.391752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.162 [2024-07-25 17:13:56.391763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:04.162 [2024-07-25 17:13:56.391772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.391855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.162 [2024-07-25 17:13:56.391869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.162 [2024-07-25 17:13:56.391884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:04.162 [2024-07-25 17:13:56.391894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.162 [2024-07-25 17:13:56.392006] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.162 [2024-07-25 17:13:56.392039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.162 [2024-07-25 17:13:56.392051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.162 [2024-07-25 17:13:56.392082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392092] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.162 [2024-07-25 17:13:56.392113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.162 [2024-07-25 17:13:56.392132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.162 [2024-07-25 17:13:56.392142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:04.162 [2024-07-25 17:13:56.392151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.162 [2024-07-25 17:13:56.392160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.162 [2024-07-25 17:13:56.392171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:04.162 [2024-07-25 17:13:56.392181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.162 [2024-07-25 17:13:56.392201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392223] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.162 [2024-07-25 17:13:56.392244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.162 [2024-07-25 17:13:56.392274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.162 [2024-07-25 17:13:56.392302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.162 [2024-07-25 17:13:56.392330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.162 [2024-07-25 17:13:56.392359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.162 [2024-07-25 17:13:56.392394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.162 [2024-07-25 17:13:56.392403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:04.162 [2024-07-25 17:13:56.392412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.162 [2024-07-25 17:13:56.392422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.162 [2024-07-25 17:13:56.392431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:04.162 [2024-07-25 17:13:56.392440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.162 [2024-07-25 17:13:56.392459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:04.162 [2024-07-25 17:13:56.392469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392478] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.162 [2024-07-25 17:13:56.392501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.162 [2024-07-25 17:13:56.392511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.162 [2024-07-25 17:13:56.392539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.162 [2024-07-25 17:13:56.392549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.162 [2024-07-25 17:13:56.392558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.162 
[2024-07-25 17:13:56.392568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.162 [2024-07-25 17:13:56.392577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.162 [2024-07-25 17:13:56.392587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.162 [2024-07-25 17:13:56.392599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.162 [2024-07-25 17:13:56.392611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.162 [2024-07-25 17:13:56.392623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:04.162 [2024-07-25 17:13:56.392633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:04.162 [2024-07-25 17:13:56.392644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:04.162 [2024-07-25 17:13:56.392654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:04.162 [2024-07-25 17:13:56.392665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:04.162 [2024-07-25 17:13:56.392675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:04.162 [2024-07-25 17:13:56.392685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:04.162 [2024-07-25 17:13:56.392695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:04.163 [2024-07-25 17:13:56.392706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:04.163 [2024-07-25 17:13:56.392716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:04.163 [2024-07-25 17:13:56.392765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.163 [2024-07-25 17:13:56.392776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.163 [2024-07-25 17:13:56.392799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.163 [2024-07-25 17:13:56.392809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.163 [2024-07-25 17:13:56.392820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.163 [2024-07-25 17:13:56.392831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.392841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.163 [2024-07-25 17:13:56.392852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:23:04.163 [2024-07-25 17:13:56.392863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.437937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.438016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.163 [2024-07-25 17:13:56.438042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.006 ms 00:23:04.163 [2024-07-25 17:13:56.438053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.438242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.438261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:04.163 [2024-07-25 17:13:56.438280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:04.163 [2024-07-25 17:13:56.438290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.475887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.475947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.163 [2024-07-25 17:13:56.475963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.565 ms 00:23:04.163 [2024-07-25 17:13:56.475973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.476115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.476134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.163 [2024-07-25 17:13:56.476146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:04.163 [2024-07-25 17:13:56.476156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.476709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.476725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.163 [2024-07-25 17:13:56.476737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:04.163 [2024-07-25 17:13:56.476748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.476907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.476924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.163 [2024-07-25 17:13:56.476935] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:04.163 [2024-07-25 17:13:56.476946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.493779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.493827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.163 [2024-07-25 17:13:56.493842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.807 ms 00:23:04.163 [2024-07-25 17:13:56.493862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.508518] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:04.163 [2024-07-25 17:13:56.508569] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:04.163 [2024-07-25 17:13:56.508585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.508598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:04.163 [2024-07-25 17:13:56.508609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.598 ms 00:23:04.163 [2024-07-25 17:13:56.508619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.533575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.533625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:04.163 [2024-07-25 17:13:56.533640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:23:04.163 [2024-07-25 17:13:56.533650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.547156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.547203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:04.163 [2024-07-25 17:13:56.547216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.439 ms 00:23:04.163 [2024-07-25 17:13:56.547226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.560309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.560351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:04.163 [2024-07-25 17:13:56.560365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.002 ms 00:23:04.163 [2024-07-25 17:13:56.560376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.163 [2024-07-25 17:13:56.561143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.163 [2024-07-25 17:13:56.561172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:04.163 [2024-07-25 17:13:56.561186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:23:04.163 [2024-07-25 17:13:56.561197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.627834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.627915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:04.421 [2024-07-25 17:13:56.627944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.603 ms 00:23:04.421 [2024-07-25 17:13:56.627955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.638412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:04.421 [2024-07-25 17:13:56.656799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.656879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:04.421 [2024-07-25 17:13:56.656896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.655 ms 00:23:04.421 [2024-07-25 17:13:56.656906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.657060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.657080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:04.421 [2024-07-25 17:13:56.657093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:04.421 [2024-07-25 17:13:56.657104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.657173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.657189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:04.421 [2024-07-25 17:13:56.657200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:04.421 [2024-07-25 17:13:56.657210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.657241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.657259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:04.421 [2024-07-25 17:13:56.657270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:04.421 [2024-07-25 17:13:56.657280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.657316] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:04.421 [2024-07-25 17:13:56.657331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.657342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:04.421 [2024-07-25 17:13:56.657353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:04.421 [2024-07-25 17:13:56.657379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.683397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.683450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:04.421 [2024-07-25 17:13:56.683465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.990 ms 00:23:04.421 [2024-07-25 17:13:56.683475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.421 [2024-07-25 17:13:56.683593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.421 [2024-07-25 17:13:56.683611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:04.421 [2024-07-25 17:13:56.683623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:04.421 [2024-07-25 17:13:56.683633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
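
The trace_step entries above come in Action / name / duration / status groups, so the per-step cost of the 'FTL startup' sequence can be pulled out of a saved console log mechanically. A minimal sketch, assuming the output was captured to a file (build.log is a hypothetical name) with one log entry per line, as the console actually emits it:

  # Pair each trace_step "name:" with the following "duration:" and list the
  # slowest steps first. build.log and the one-entry-per-line layout are
  # assumptions, not part of this run.
  awk '/trace_step.*name:/     {sub(/.*name: /, "");     name = $0}
       /trace_step.*duration:/ {sub(/.*duration: /, ""); print $1 " ms\t" name}' build.log \
    | sort -rn | head

Sorted this way, the figures already visible above stand out: 'Restore P2L checkpoints' (66.603 ms) and 'Initialize metadata' (45.006 ms) account for the largest share of this particular startup.
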
00:23:04.421 [2024-07-25 17:13:56.684985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.421 [2024-07-25 17:13:56.688550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.837 ms, result 0 00:23:04.421 [2024-07-25 17:13:56.689380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:04.421 [2024-07-25 17:13:56.703267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.888  Copying: 25/256 [MB] (25 MBps) Copying: 47/256 [MB] (22 MBps) Copying: 69/256 [MB] (22 MBps) Copying: 92/256 [MB] (22 MBps) Copying: 115/256 [MB] (22 MBps) Copying: 138/256 [MB] (22 MBps) Copying: 161/256 [MB] (22 MBps) Copying: 183/256 [MB] (22 MBps) Copying: 205/256 [MB] (22 MBps) Copying: 227/256 [MB] (21 MBps) Copying: 248/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-25 17:14:08.249517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.888 [2024-07-25 17:14:08.261897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.261955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.888 [2024-07-25 17:14:08.262014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:15.888 [2024-07-25 17:14:08.262029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.262068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:15.888 [2024-07-25 17:14:08.265480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.265527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.888 [2024-07-25 17:14:08.265567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:23:15.888 [2024-07-25 17:14:08.265578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.265883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.265900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.888 [2024-07-25 17:14:08.265912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:15.888 [2024-07-25 17:14:08.265927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.269030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.269075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:15.888 [2024-07-25 17:14:08.269110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:23:15.888 [2024-07-25 17:14:08.269121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.275258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.275304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.888 [2024-07-25 17:14:08.275333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:23:15.888 [2024-07-25 17:14:08.275344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:15.888 [2024-07-25 17:14:08.300772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.300827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.888 [2024-07-25 17:14:08.300858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.356 ms 00:23:15.888 [2024-07-25 17:14:08.300868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.317057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.317115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.888 [2024-07-25 17:14:08.317147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.146 ms 00:23:15.888 [2024-07-25 17:14:08.317164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.317304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.317337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.888 [2024-07-25 17:14:08.317366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:15.888 [2024-07-25 17:14:08.317407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.888 [2024-07-25 17:14:08.345433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.888 [2024-07-25 17:14:08.345485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:15.888 [2024-07-25 17:14:08.345516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.005 ms 00:23:15.888 [2024-07-25 17:14:08.345525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.147 [2024-07-25 17:14:08.372121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.147 [2024-07-25 17:14:08.372177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:16.147 [2024-07-25 17:14:08.372206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.538 ms 00:23:16.147 [2024-07-25 17:14:08.372217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.147 [2024-07-25 17:14:08.397015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.147 [2024-07-25 17:14:08.397052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.147 [2024-07-25 17:14:08.397097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.740 ms 00:23:16.147 [2024-07-25 17:14:08.397107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.147 [2024-07-25 17:14:08.420696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.148 [2024-07-25 17:14:08.420734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.148 [2024-07-25 17:14:08.420763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:23:16.148 [2024-07-25 17:14:08.420773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.148 [2024-07-25 17:14:08.420813] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.148 [2024-07-25 17:14:08.420840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.420987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421478] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421760] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.148 [2024-07-25 17:14:08.421838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.421987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.422008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.422019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.422030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.149 [2024-07-25 17:14:08.422050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.149 [2024-07-25 17:14:08.422061] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 389a3003-122c-466b-a4fe-a4bfdc3017fc 00:23:16.149 [2024-07-25 17:14:08.422072] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:16.149 [2024-07-25 17:14:08.422081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:16.149 [2024-07-25 17:14:08.422104] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:16.149 [2024-07-25 17:14:08.422115] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:16.149 [2024-07-25 17:14:08.422124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.149 [2024-07-25 17:14:08.422134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.149 [2024-07-25 17:14:08.422144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.149 [2024-07-25 17:14:08.422153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.149 [2024-07-25 17:14:08.422162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.149 [2024-07-25 17:14:08.422173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.149 [2024-07-25 17:14:08.422183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.149 [2024-07-25 17:14:08.422213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:23:16.149 [2024-07-25 17:14:08.422222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.436358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.149 [2024-07-25 17:14:08.436393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.149 [2024-07-25 17:14:08.436422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.098 ms 00:23:16.149 [2024-07-25 17:14:08.436432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.436890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.149 [2024-07-25 17:14:08.436925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.149 [2024-07-25 17:14:08.436939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:23:16.149 [2024-07-25 17:14:08.436949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.470522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.149 [2024-07-25 17:14:08.470563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.149 [2024-07-25 17:14:08.470592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.149 [2024-07-25 17:14:08.470603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.470728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.149 [2024-07-25 17:14:08.470748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.149 [2024-07-25 17:14:08.470760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.149 [2024-07-25 17:14:08.470770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.470822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.149 [2024-07-25 17:14:08.470854] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.149 [2024-07-25 17:14:08.470881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.149 [2024-07-25 17:14:08.470891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.470915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.149 [2024-07-25 17:14:08.470928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.149 [2024-07-25 17:14:08.470983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.149 [2024-07-25 17:14:08.470993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.149 [2024-07-25 17:14:08.553373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.149 [2024-07-25 17:14:08.553476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.149 [2024-07-25 17:14:08.553493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.149 [2024-07-25 17:14:08.553504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.621897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.621952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.408 [2024-07-25 17:14:08.621983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.622140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.408 [2024-07-25 17:14:08.622152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.622234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.408 [2024-07-25 17:14:08.622245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.622437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.408 [2024-07-25 17:14:08.622448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.622522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.408 [2024-07-25 17:14:08.622533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
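
The shutdown path above dumps per-band validity (Bands 1-100, all 'free' with wr_cnt 0) followed by device statistics (total writes 960, user writes 0, WAF reported as inf). When comparing runs it helps to tally those dumps rather than eyeball them; a minimal sketch, again assuming the console output sits in build.log with one entry per line (deriving WAF from the total/user-writes ratio is an inference from this dump, not taken from the FTL sources):

  # Count bands by state and recompute the write-amplification figure.
  grep -o 'state: [a-z]*' build.log | sort | uniq -c
  awk '/total writes:/ {total = $NF}
       /user writes:/  {user  = $NF}
       END {print "WAF:", (user ? total / user : "inf")}' build.log
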
00:23:16.408 [2024-07-25 17:14:08.622609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.408 [2024-07-25 17:14:08.622619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.408 [2024-07-25 17:14:08.622735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.408 [2024-07-25 17:14:08.622746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.408 [2024-07-25 17:14:08.622762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.408 [2024-07-25 17:14:08.622927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.026 ms, result 0 00:23:17.343 00:23:17.343 00:23:17.343 17:14:09 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:17.910 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:17.910 17:14:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80153 00:23:17.910 17:14:10 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80153 ']' 00:23:17.910 17:14:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80153 00:23:17.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80153) - No such process 00:23:17.910 Process with pid 80153 is not found 00:23:17.910 17:14:10 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 80153 is not found' 00:23:17.910 00:23:17.910 real 1m11.655s 00:23:17.910 user 1m37.048s 00:23:17.910 sys 0m7.141s 00:23:17.910 17:14:10 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.910 ************************************ 00:23:17.910 END TEST ftl_trim 00:23:17.910 ************************************ 00:23:17.910 17:14:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:17.910 17:14:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:17.910 17:14:10 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:17.910 17:14:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.910 17:14:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:17.910 ************************************ 00:23:17.910 START TEST ftl_restore 00:23:17.910 ************************************ 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:17.910 * Looking for test storage... 
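
This is where ftl_trim hands over to ftl_restore: restore.sh is invoked with '-c 0000:00:10.0' for the NV-cache device and '0000:00:11.0' as the base device, and the log that follows shows it assembling the FTL stack through rpc.py. A condensed sketch of the equivalent manual sequence, with the run-specific lvstore/lvol UUIDs left as placeholders (the commands, names, and sizes are taken verbatim from this log):

  cd /home/vagrant/spdk_repo/spdk
  ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0   # -c <nv-cache BDF> <base BDF>

  # Roughly what the script issues once spdk_tgt is up:
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  ./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  ./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>   # thin-provisioned lvol
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                       # NV-cache partition
  ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The bdev_get_bdevs JSON further down is what the get_bdev_size helper parses with jq ('.[] .block_size' and '.[] .num_blocks') to size these steps.
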
00:23:17.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DWCahVqLGb 00:23:17.910 17:14:10 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80420 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80420 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80420 ']' 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.910 17:14:10 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.910 17:14:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:18.169 [2024-07-25 17:14:10.454786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:18.169 [2024-07-25 17:14:10.455051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80420 ] 00:23:18.169 [2024-07-25 17:14:10.631150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.428 [2024-07-25 17:14:10.841513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:19.403 17:14:11 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:19.403 17:14:11 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:19.403 17:14:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:19.661 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:19.661 { 00:23:19.661 "name": "nvme0n1", 00:23:19.661 "aliases": [ 00:23:19.661 "e217e12e-65a4-4484-866d-c35b601c8c15" 00:23:19.661 ], 00:23:19.661 "product_name": "NVMe disk", 00:23:19.661 "block_size": 4096, 00:23:19.661 "num_blocks": 1310720, 00:23:19.661 "uuid": "e217e12e-65a4-4484-866d-c35b601c8c15", 00:23:19.661 "assigned_rate_limits": { 00:23:19.661 "rw_ios_per_sec": 0, 00:23:19.661 "rw_mbytes_per_sec": 0, 00:23:19.661 "r_mbytes_per_sec": 0, 00:23:19.661 "w_mbytes_per_sec": 0 00:23:19.661 }, 00:23:19.661 "claimed": true, 00:23:19.661 "claim_type": "read_many_write_one", 00:23:19.661 "zoned": false, 00:23:19.661 "supported_io_types": { 00:23:19.661 "read": true, 00:23:19.661 "write": true, 00:23:19.661 "unmap": true, 00:23:19.661 "flush": true, 00:23:19.661 "reset": true, 00:23:19.661 "nvme_admin": true, 00:23:19.661 "nvme_io": true, 00:23:19.661 "nvme_io_md": false, 00:23:19.661 "write_zeroes": true, 00:23:19.661 "zcopy": false, 00:23:19.661 "get_zone_info": false, 00:23:19.661 "zone_management": false, 00:23:19.661 "zone_append": false, 00:23:19.661 "compare": true, 00:23:19.661 "compare_and_write": false, 00:23:19.661 "abort": true, 00:23:19.661 "seek_hole": false, 00:23:19.661 "seek_data": false, 00:23:19.661 "copy": true, 00:23:19.661 "nvme_iov_md": false 00:23:19.661 }, 00:23:19.661 "driver_specific": { 00:23:19.661 "nvme": [ 00:23:19.661 { 00:23:19.661 "pci_address": "0000:00:11.0", 00:23:19.661 "trid": { 00:23:19.661 "trtype": "PCIe", 00:23:19.661 "traddr": "0000:00:11.0" 00:23:19.661 }, 00:23:19.661 "ctrlr_data": { 00:23:19.661 "cntlid": 0, 00:23:19.661 "vendor_id": "0x1b36", 00:23:19.661 "model_number": "QEMU NVMe Ctrl", 00:23:19.661 "serial_number": "12341", 00:23:19.661 "firmware_revision": "8.0.0", 00:23:19.661 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:19.661 "oacs": { 00:23:19.661 "security": 0, 00:23:19.661 "format": 1, 00:23:19.661 "firmware": 0, 00:23:19.661 "ns_manage": 1 00:23:19.661 }, 00:23:19.661 "multi_ctrlr": false, 00:23:19.661 "ana_reporting": false 00:23:19.661 }, 00:23:19.661 "vs": { 00:23:19.661 "nvme_version": "1.4" 00:23:19.661 }, 00:23:19.661 "ns_data": { 00:23:19.661 "id": 1, 00:23:19.661 "can_share": false 00:23:19.661 } 00:23:19.661 } 00:23:19.661 ], 00:23:19.661 "mp_policy": "active_passive" 00:23:19.661 } 00:23:19.661 } 00:23:19.661 ]' 00:23:19.661 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:19.661 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:19.661 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:19.919 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:19.919 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:19.919 17:14:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:23:19.919 17:14:12 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:19.919 17:14:12 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:19.919 17:14:12 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:19.919 17:14:12 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:19.919 17:14:12 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:20.178 17:14:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=443b97c8-ffdf-44dc-9ed6-9ab33881ce65 00:23:20.178 17:14:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:20.178 17:14:12 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 443b97c8-ffdf-44dc-9ed6-9ab33881ce65 00:23:20.436 17:14:12 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:20.436 17:14:12 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=82dfaf6f-9174-4fd9-a39d-971ebfe53599 00:23:20.436 17:14:12 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 82dfaf6f-9174-4fd9-a39d-971ebfe53599 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:20.695 17:14:13 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.695 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.695 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:20.695 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:20.695 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:20.695 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:20.954 { 00:23:20.954 "name": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:20.954 "aliases": [ 00:23:20.954 "lvs/nvme0n1p0" 00:23:20.954 ], 00:23:20.954 "product_name": "Logical Volume", 00:23:20.954 "block_size": 4096, 00:23:20.954 "num_blocks": 26476544, 00:23:20.954 "uuid": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:20.954 "assigned_rate_limits": { 00:23:20.954 "rw_ios_per_sec": 0, 00:23:20.954 "rw_mbytes_per_sec": 0, 00:23:20.954 "r_mbytes_per_sec": 0, 00:23:20.954 "w_mbytes_per_sec": 0 00:23:20.954 }, 00:23:20.954 "claimed": false, 00:23:20.954 "zoned": false, 00:23:20.954 "supported_io_types": { 00:23:20.954 "read": true, 00:23:20.954 "write": true, 00:23:20.954 "unmap": true, 00:23:20.954 "flush": false, 00:23:20.954 "reset": true, 00:23:20.954 "nvme_admin": false, 00:23:20.954 "nvme_io": false, 00:23:20.954 "nvme_io_md": false, 00:23:20.954 "write_zeroes": true, 00:23:20.954 "zcopy": false, 00:23:20.954 "get_zone_info": false, 00:23:20.954 "zone_management": false, 00:23:20.954 "zone_append": false, 00:23:20.954 "compare": false, 00:23:20.954 "compare_and_write": false, 00:23:20.954 "abort": 
false, 00:23:20.954 "seek_hole": true, 00:23:20.954 "seek_data": true, 00:23:20.954 "copy": false, 00:23:20.954 "nvme_iov_md": false 00:23:20.954 }, 00:23:20.954 "driver_specific": { 00:23:20.954 "lvol": { 00:23:20.954 "lvol_store_uuid": "82dfaf6f-9174-4fd9-a39d-971ebfe53599", 00:23:20.954 "base_bdev": "nvme0n1", 00:23:20.954 "thin_provision": true, 00:23:20.954 "num_allocated_clusters": 0, 00:23:20.954 "snapshot": false, 00:23:20.954 "clone": false, 00:23:20.954 "esnap_clone": false 00:23:20.954 } 00:23:20.954 } 00:23:20.954 } 00:23:20.954 ]' 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:20.954 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:20.954 17:14:13 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:20.954 17:14:13 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:20.954 17:14:13 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:21.522 17:14:13 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:21.522 17:14:13 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:21.522 17:14:13 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=124bc423-766e-4061-8f9d-9e59eb06b934 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:21.522 { 00:23:21.522 "name": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:21.522 "aliases": [ 00:23:21.522 "lvs/nvme0n1p0" 00:23:21.522 ], 00:23:21.522 "product_name": "Logical Volume", 00:23:21.522 "block_size": 4096, 00:23:21.522 "num_blocks": 26476544, 00:23:21.522 "uuid": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:21.522 "assigned_rate_limits": { 00:23:21.522 "rw_ios_per_sec": 0, 00:23:21.522 "rw_mbytes_per_sec": 0, 00:23:21.522 "r_mbytes_per_sec": 0, 00:23:21.522 "w_mbytes_per_sec": 0 00:23:21.522 }, 00:23:21.522 "claimed": false, 00:23:21.522 "zoned": false, 00:23:21.522 "supported_io_types": { 00:23:21.522 "read": true, 00:23:21.522 "write": true, 00:23:21.522 "unmap": true, 00:23:21.522 "flush": false, 00:23:21.522 "reset": true, 00:23:21.522 "nvme_admin": false, 00:23:21.522 "nvme_io": false, 00:23:21.522 "nvme_io_md": false, 00:23:21.522 "write_zeroes": true, 00:23:21.522 "zcopy": false, 00:23:21.522 "get_zone_info": false, 00:23:21.522 "zone_management": false, 00:23:21.522 "zone_append": false, 00:23:21.522 "compare": false, 00:23:21.522 "compare_and_write": false, 00:23:21.522 "abort": false, 00:23:21.522 "seek_hole": true, 00:23:21.522 "seek_data": 
true, 00:23:21.522 "copy": false, 00:23:21.522 "nvme_iov_md": false 00:23:21.522 }, 00:23:21.522 "driver_specific": { 00:23:21.522 "lvol": { 00:23:21.522 "lvol_store_uuid": "82dfaf6f-9174-4fd9-a39d-971ebfe53599", 00:23:21.522 "base_bdev": "nvme0n1", 00:23:21.522 "thin_provision": true, 00:23:21.522 "num_allocated_clusters": 0, 00:23:21.522 "snapshot": false, 00:23:21.522 "clone": false, 00:23:21.522 "esnap_clone": false 00:23:21.522 } 00:23:21.522 } 00:23:21.522 } 00:23:21.522 ]' 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:21.522 17:14:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:21.781 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:21.781 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:21.781 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:21.781 17:14:14 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:21.781 17:14:14 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:22.039 17:14:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:22.039 17:14:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:22.039 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=124bc423-766e-4061-8f9d-9e59eb06b934 00:23:22.039 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:22.039 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:22.039 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:22.039 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 124bc423-766e-4061-8f9d-9e59eb06b934 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:22.298 { 00:23:22.298 "name": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:22.298 "aliases": [ 00:23:22.298 "lvs/nvme0n1p0" 00:23:22.298 ], 00:23:22.298 "product_name": "Logical Volume", 00:23:22.298 "block_size": 4096, 00:23:22.298 "num_blocks": 26476544, 00:23:22.298 "uuid": "124bc423-766e-4061-8f9d-9e59eb06b934", 00:23:22.298 "assigned_rate_limits": { 00:23:22.298 "rw_ios_per_sec": 0, 00:23:22.298 "rw_mbytes_per_sec": 0, 00:23:22.298 "r_mbytes_per_sec": 0, 00:23:22.298 "w_mbytes_per_sec": 0 00:23:22.298 }, 00:23:22.298 "claimed": false, 00:23:22.298 "zoned": false, 00:23:22.298 "supported_io_types": { 00:23:22.298 "read": true, 00:23:22.298 "write": true, 00:23:22.298 "unmap": true, 00:23:22.298 "flush": false, 00:23:22.298 "reset": true, 00:23:22.298 "nvme_admin": false, 00:23:22.298 "nvme_io": false, 00:23:22.298 "nvme_io_md": false, 00:23:22.298 "write_zeroes": true, 00:23:22.298 "zcopy": false, 00:23:22.298 "get_zone_info": false, 00:23:22.298 "zone_management": false, 00:23:22.298 "zone_append": false, 00:23:22.298 "compare": false, 00:23:22.298 "compare_and_write": false, 00:23:22.298 "abort": false, 00:23:22.298 "seek_hole": true, 00:23:22.298 "seek_data": true, 00:23:22.298 "copy": false, 00:23:22.298 "nvme_iov_md": false 00:23:22.298 }, 00:23:22.298 "driver_specific": { 00:23:22.298 "lvol": { 00:23:22.298 "lvol_store_uuid": "82dfaf6f-9174-4fd9-a39d-971ebfe53599", 00:23:22.298 "base_bdev": 
"nvme0n1", 00:23:22.298 "thin_provision": true, 00:23:22.298 "num_allocated_clusters": 0, 00:23:22.298 "snapshot": false, 00:23:22.298 "clone": false, 00:23:22.298 "esnap_clone": false 00:23:22.298 } 00:23:22.298 } 00:23:22.298 } 00:23:22.298 ]' 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:22.298 17:14:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 124bc423-766e-4061-8f9d-9e59eb06b934 --l2p_dram_limit 10' 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:22.298 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:22.298 17:14:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 124bc423-766e-4061-8f9d-9e59eb06b934 --l2p_dram_limit 10 -c nvc0n1p0 00:23:22.558 [2024-07-25 17:14:14.800886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.800964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.558 [2024-07-25 17:14:14.801016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:22.558 [2024-07-25 17:14:14.801032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.801121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.801140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.558 [2024-07-25 17:14:14.801153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:22.558 [2024-07-25 17:14:14.801165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.801192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:22.558 [2024-07-25 17:14:14.802297] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.558 [2024-07-25 17:14:14.802344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.802379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.558 [2024-07-25 17:14:14.802391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:23:22.558 [2024-07-25 17:14:14.802404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.802657] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:23:22.558 [2024-07-25 
17:14:14.805180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.805217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:22.558 [2024-07-25 17:14:14.805235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:22.558 [2024-07-25 17:14:14.805247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.818404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.818462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.558 [2024-07-25 17:14:14.818497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.083 ms 00:23:22.558 [2024-07-25 17:14:14.818509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.818624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.818651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.558 [2024-07-25 17:14:14.818666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:22.558 [2024-07-25 17:14:14.818682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.818797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.818814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.558 [2024-07-25 17:14:14.818832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:22.558 [2024-07-25 17:14:14.818843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.818877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.558 [2024-07-25 17:14:14.824342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.824403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.558 [2024-07-25 17:14:14.824435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.476 ms 00:23:22.558 [2024-07-25 17:14:14.824447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.824490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.824517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.558 [2024-07-25 17:14:14.824528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:22.558 [2024-07-25 17:14:14.824547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.824608] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:22.558 [2024-07-25 17:14:14.824801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.558 [2024-07-25 17:14:14.824820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.558 [2024-07-25 17:14:14.824840] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:22.558 [2024-07-25 17:14:14.824854] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:23:22.558 [2024-07-25 17:14:14.824869] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.558 [2024-07-25 17:14:14.824880] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:22.558 [2024-07-25 17:14:14.824898] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.558 [2024-07-25 17:14:14.824908] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.558 [2024-07-25 17:14:14.824921] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.558 [2024-07-25 17:14:14.824932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.824955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.558 [2024-07-25 17:14:14.824966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:23:22.558 [2024-07-25 17:14:14.824979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.825094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.558 [2024-07-25 17:14:14.825115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.558 [2024-07-25 17:14:14.825126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:22.558 [2024-07-25 17:14:14.825142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.558 [2024-07-25 17:14:14.825260] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.558 [2024-07-25 17:14:14.825282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.558 [2024-07-25 17:14:14.825305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.558 [2024-07-25 17:14:14.825320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.558 [2024-07-25 17:14:14.825343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:22.558 [2024-07-25 17:14:14.825364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.558 [2024-07-25 17:14:14.825374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.558 [2024-07-25 17:14:14.825398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.558 [2024-07-25 17:14:14.825410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:22.558 [2024-07-25 17:14:14.825420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.558 [2024-07-25 17:14:14.825432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.558 [2024-07-25 17:14:14.825458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:22.558 [2024-07-25 17:14:14.825470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.558 [2024-07-25 17:14:14.825495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
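For reference, the capacities in this layout dump can be cross-checked against the lvol geometry queried earlier (block_size 4096, num_blocks 26476544) and the L2P entry count reported just above. A minimal shell sketch of that arithmetic, not part of restore.sh:

# sketch only: recomputes figures already printed above
bs=4096; nb=26476544                       # block_size / num_blocks from bdev_get_bdevs
echo $(( bs * nb / 1024 / 1024 ))          # 103424 -> "Base device capacity: 103424.00 MiB"
entries=20971520; addr=4                   # "L2P entries" and "L2P address size" above
echo $(( entries * addr / 1024 / 1024 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
# the --l2p_dram_limit 10 passed to bdev_ftl_create above caps how much of that table stays resident in DRAM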
00:23:22.558 [2024-07-25 17:14:14.825506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.558 [2024-07-25 17:14:14.825528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:22.558 [2024-07-25 17:14:14.825547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.558 [2024-07-25 17:14:14.825558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.559 [2024-07-25 17:14:14.825578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.559 [2024-07-25 17:14:14.825616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.559 [2024-07-25 17:14:14.825626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.559 [2024-07-25 17:14:14.825648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.559 [2024-07-25 17:14:14.825660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.559 [2024-07-25 17:14:14.825682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.559 [2024-07-25 17:14:14.825692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.559 [2024-07-25 17:14:14.825734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:22.559 [2024-07-25 17:14:14.825748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:22.559 [2024-07-25 17:14:14.825758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.559 [2024-07-25 17:14:14.825771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.559 [2024-07-25 17:14:14.825781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:22.559 [2024-07-25 17:14:14.825793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.559 [2024-07-25 17:14:14.825816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:22.559 [2024-07-25 17:14:14.825827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825839] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.559 [2024-07-25 17:14:14.825851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.559 [2024-07-25 17:14:14.825864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.559 [2024-07-25 17:14:14.825875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.559 [2024-07-25 17:14:14.825888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.559 [2024-07-25 17:14:14.825899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.559 [2024-07-25 17:14:14.825914] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.559 [2024-07-25 17:14:14.825924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.559 [2024-07-25 17:14:14.825937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.559 [2024-07-25 17:14:14.825948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.559 [2024-07-25 17:14:14.825980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.559 [2024-07-25 17:14:14.825997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:22.559 [2024-07-25 17:14:14.826024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:22.559 [2024-07-25 17:14:14.826037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:22.559 [2024-07-25 17:14:14.826048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:22.559 [2024-07-25 17:14:14.826079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:22.559 [2024-07-25 17:14:14.826090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:22.559 [2024-07-25 17:14:14.826104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:22.559 [2024-07-25 17:14:14.826132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:22.559 [2024-07-25 17:14:14.826144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:22.559 [2024-07-25 17:14:14.826155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:22.559 [2024-07-25 17:14:14.826216] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.559 [2024-07-25 17:14:14.826228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826257] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.559 [2024-07-25 17:14:14.826269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.559 [2024-07-25 17:14:14.826282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.559 [2024-07-25 17:14:14.826292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.559 [2024-07-25 17:14:14.826306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.559 [2024-07-25 17:14:14.826318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.559 [2024-07-25 17:14:14.826332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:23:22.559 [2024-07-25 17:14:14.826342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.559 [2024-07-25 17:14:14.826399] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:22.559 [2024-07-25 17:14:14.826415] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:25.846 [2024-07-25 17:14:17.701423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.701526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:25.846 [2024-07-25 17:14:17.701567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2875.037 ms 00:23:25.846 [2024-07-25 17:14:17.701579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.740029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.740115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.846 [2024-07-25 17:14:17.740155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.079 ms 00:23:25.846 [2024-07-25 17:14:17.740168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.740353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.740371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:25.846 [2024-07-25 17:14:17.740390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:25.846 [2024-07-25 17:14:17.740401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.778418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.778499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.846 [2024-07-25 17:14:17.778537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.906 ms 00:23:25.846 [2024-07-25 17:14:17.778548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.778614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.778637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.846 [2024-07-25 17:14:17.778676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:23:25.846 [2024-07-25 17:14:17.778687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.779363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.779406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.846 [2024-07-25 17:14:17.779422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:23:25.846 [2024-07-25 17:14:17.779434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.779585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.779604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.846 [2024-07-25 17:14:17.779618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:23:25.846 [2024-07-25 17:14:17.779628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.797291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.797357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.846 [2024-07-25 17:14:17.797395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.632 ms 00:23:25.846 [2024-07-25 17:14:17.797406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.809386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:25.846 [2024-07-25 17:14:17.813400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.813471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:25.846 [2024-07-25 17:14:17.813488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.884 ms 00:23:25.846 [2024-07-25 17:14:17.813501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.901369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.901496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:25.846 [2024-07-25 17:14:17.901517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.827 ms 00:23:25.846 [2024-07-25 17:14:17.901532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.901791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.901814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:25.846 [2024-07-25 17:14:17.901827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:23:25.846 [2024-07-25 17:14:17.901844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.931232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.931319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:25.846 [2024-07-25 17:14:17.931339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.293 ms 00:23:25.846 [2024-07-25 17:14:17.931357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.957635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 
17:14:17.957718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:25.846 [2024-07-25 17:14:17.957737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:23:25.846 [2024-07-25 17:14:17.957751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:17.958660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:17.958726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:25.846 [2024-07-25 17:14:17.958745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:23:25.846 [2024-07-25 17:14:17.958758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.044532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.044633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:25.846 [2024-07-25 17:14:18.044654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.707 ms 00:23:25.846 [2024-07-25 17:14:18.044672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.073871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.073961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:25.846 [2024-07-25 17:14:18.073981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.146 ms 00:23:25.846 [2024-07-25 17:14:18.074020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.101239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.101329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:25.846 [2024-07-25 17:14:18.101347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.164 ms 00:23:25.846 [2024-07-25 17:14:18.101361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.129597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.129685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:25.846 [2024-07-25 17:14:18.129703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.187 ms 00:23:25.846 [2024-07-25 17:14:18.129718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.129776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.129797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:25.846 [2024-07-25 17:14:18.129810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:25.846 [2024-07-25 17:14:18.129826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.129971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.846 [2024-07-25 17:14:18.130013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:25.846 [2024-07-25 17:14:18.130043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:25.846 [2024-07-25 17:14:18.130059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.846 [2024-07-25 17:14:18.131470] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3330.013 ms, result 0 00:23:25.846 { 00:23:25.846 "name": "ftl0", 00:23:25.846 "uuid": "94fc35b7-b5e7-46b8-bc04-5da701b70015" 00:23:25.846 } 00:23:25.846 17:14:18 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:25.846 17:14:18 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:26.105 17:14:18 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:23:26.105 17:14:18 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:26.365 [2024-07-25 17:14:18.678598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.678721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:26.365 [2024-07-25 17:14:18.678763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:26.365 [2024-07-25 17:14:18.678776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.678814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:26.365 [2024-07-25 17:14:18.682459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.682508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:26.365 [2024-07-25 17:14:18.682539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:23:26.365 [2024-07-25 17:14:18.682552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.682918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.682947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:26.365 [2024-07-25 17:14:18.682986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:26.365 [2024-07-25 17:14:18.682999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.685849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.685896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:26.365 [2024-07-25 17:14:18.685926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:23:26.365 [2024-07-25 17:14:18.685938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.691525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.691581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:26.365 [2024-07-25 17:14:18.691610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.565 ms 00:23:26.365 [2024-07-25 17:14:18.691623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.719025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.719111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:26.365 [2024-07-25 17:14:18.719129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.340 ms 00:23:26.365 [2024-07-25 17:14:18.719142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 
17:14:18.736948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.737060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:26.365 [2024-07-25 17:14:18.737079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.757 ms 00:23:26.365 [2024-07-25 17:14:18.737094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.737281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.737322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:26.365 [2024-07-25 17:14:18.737336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:23:26.365 [2024-07-25 17:14:18.737349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.764116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.764185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:26.365 [2024-07-25 17:14:18.764218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.738 ms 00:23:26.365 [2024-07-25 17:14:18.764231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.789901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.789981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:26.365 [2024-07-25 17:14:18.790005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.625 ms 00:23:26.365 [2024-07-25 17:14:18.790020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.365 [2024-07-25 17:14:18.814605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.365 [2024-07-25 17:14:18.814708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:26.365 [2024-07-25 17:14:18.814732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:23:26.365 [2024-07-25 17:14:18.814746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.624 [2024-07-25 17:14:18.839084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.624 [2024-07-25 17:14:18.839168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:26.624 [2024-07-25 17:14:18.839184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.238 ms 00:23:26.624 [2024-07-25 17:14:18.839197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.624 [2024-07-25 17:14:18.839242] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:26.624 [2024-07-25 17:14:18.839269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 
17:14:18.839334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:23:26.624 [2024-07-25 17:14:18.839698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:26.624 [2024-07-25 17:14:18.839747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.839987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:26.625 [2024-07-25 17:14:18.840702] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:26.625 [2024-07-25 17:14:18.840714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:23:26.625 [2024-07-25 17:14:18.840727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:26.625 [2024-07-25 17:14:18.840737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:26.625 [2024-07-25 17:14:18.840751] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:26.625 [2024-07-25 17:14:18.840762] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:26.625 [2024-07-25 17:14:18.840790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:26.625 [2024-07-25 17:14:18.840801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:26.625 [2024-07-25 17:14:18.840813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:26.625 [2024-07-25 17:14:18.840823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:26.625 [2024-07-25 17:14:18.840834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:26.625 [2024-07-25 17:14:18.840844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.625 [2024-07-25 17:14:18.840857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:26.625 [2024-07-25 17:14:18.840869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:23:26.625 [2024-07-25 17:14:18.840884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.855445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.625 [2024-07-25 17:14:18.855518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:26.625 [2024-07-25 17:14:18.855533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.517 ms 00:23:26.625 [2024-07-25 17:14:18.855546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.856065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.625 [2024-07-25 17:14:18.856097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:26.625 [2024-07-25 17:14:18.856117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:23:26.625 [2024-07-25 17:14:18.856146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.900060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:18.900170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.625 [2024-07-25 17:14:18.900190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:18.900204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.900297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:18.900315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.625 [2024-07-25 17:14:18.900330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:18.900343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.900500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:18.900539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.625 [2024-07-25 17:14:18.900552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:18.900565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.900591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:18.900609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:23:26.625 [2024-07-25 17:14:18.900621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:18.900638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:18.993593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:18.993713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.625 [2024-07-25 17:14:18.993732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:18.993746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.070747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.070864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:26.625 [2024-07-25 17:14:19.070887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.070902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:26.625 [2024-07-25 17:14:19.071171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:26.625 [2024-07-25 17:14:19.071304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:26.625 [2024-07-25 17:14:19.071495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:26.625 [2024-07-25 17:14:19.071601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:26.625 [2024-07-25 17:14:19.071693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.071791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:26.625 [2024-07-25 17:14:19.071813] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:26.625 [2024-07-25 17:14:19.071825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:26.625 [2024-07-25 17:14:19.071838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.625 [2024-07-25 17:14:19.072002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.362 ms, result 0 00:23:26.625 true 00:23:26.884 17:14:19 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80420 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80420 ']' 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80420 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80420 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.884 killing process with pid 80420 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.884 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80420' 00:23:26.885 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80420 00:23:26.885 17:14:19 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80420 00:23:32.154 17:14:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:35.466 262144+0 records in 00:23:35.466 262144+0 records out 00:23:35.466 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.14087 s, 259 MB/s 00:23:35.466 17:14:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:37.367 17:14:29 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:37.367 [2024-07-25 17:14:29.627672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
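The dd statistics just above are self-consistent; a quick arithmetic check (a sketch only, not part of the test script), before spdk_dd writes the same 1 GiB test file out through the ftl0 bdev:

# sketch only: verifies the dd numbers reported above
bytes=$(( 4096 * 262144 ))    # bs=4K, count=256K -> 1073741824 bytes (1.0 GiB)
awk -v b="$bytes" 'BEGIN { printf "%.1f MB/s\n", b / 4.14087 / 1e6 }'   # ~259.3, matching dd's 259 MB/s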
00:23:37.367 [2024-07-25 17:14:29.627853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80653 ] 00:23:37.367 [2024-07-25 17:14:29.793409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.625 [2024-07-25 17:14:30.049393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.193 [2024-07-25 17:14:30.354181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.193 [2024-07-25 17:14:30.354302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.193 [2024-07-25 17:14:30.514237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.193 [2024-07-25 17:14:30.514305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.193 [2024-07-25 17:14:30.514342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.193 [2024-07-25 17:14:30.514353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.193 [2024-07-25 17:14:30.514411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.193 [2024-07-25 17:14:30.514428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.193 [2024-07-25 17:14:30.514440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:38.193 [2024-07-25 17:14:30.514455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.193 [2024-07-25 17:14:30.514487] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.193 [2024-07-25 17:14:30.515405] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.193 [2024-07-25 17:14:30.515448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.193 [2024-07-25 17:14:30.515462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.193 [2024-07-25 17:14:30.515475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:23:38.193 [2024-07-25 17:14:30.515486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.193 [2024-07-25 17:14:30.517502] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.193 [2024-07-25 17:14:30.532199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.193 [2024-07-25 17:14:30.532260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.193 [2024-07-25 17:14:30.532293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.697 ms 00:23:38.193 [2024-07-25 17:14:30.532306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.193 [2024-07-25 17:14:30.532390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.193 [2024-07-25 17:14:30.532412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:38.193 [2024-07-25 17:14:30.532423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:38.193 [2024-07-25 17:14:30.532434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.193 [2024-07-25 17:14:30.541463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:38.193 [2024-07-25 17:14:30.541521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.193 [2024-07-25 17:14:30.541551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.912 ms 00:23:38.193 [2024-07-25 17:14:30.541563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.541673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.541692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.194 [2024-07-25 17:14:30.541704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:38.194 [2024-07-25 17:14:30.541715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.541788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.541806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.194 [2024-07-25 17:14:30.541818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:38.194 [2024-07-25 17:14:30.541830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.541862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.194 [2024-07-25 17:14:30.546434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.546486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.194 [2024-07-25 17:14:30.546515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.580 ms 00:23:38.194 [2024-07-25 17:14:30.546526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.546569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.546584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.194 [2024-07-25 17:14:30.546596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:38.194 [2024-07-25 17:14:30.546607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.546726] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.194 [2024-07-25 17:14:30.546762] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.194 [2024-07-25 17:14:30.546820] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.194 [2024-07-25 17:14:30.546844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:38.194 [2024-07-25 17:14:30.546946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.194 [2024-07-25 17:14:30.546962] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.194 [2024-07-25 17:14:30.546992] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:38.194 [2024-07-25 17:14:30.547022] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547035] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547074] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:38.194 [2024-07-25 17:14:30.547107] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.194 [2024-07-25 17:14:30.547119] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.194 [2024-07-25 17:14:30.547130] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.194 [2024-07-25 17:14:30.547142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.547173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.194 [2024-07-25 17:14:30.547185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:23:38.194 [2024-07-25 17:14:30.547196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.547301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.194 [2024-07-25 17:14:30.547316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.194 [2024-07-25 17:14:30.547327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:38.194 [2024-07-25 17:14:30.547338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.194 [2024-07-25 17:14:30.547453] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.194 [2024-07-25 17:14:30.547470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.194 [2024-07-25 17:14:30.547486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.194 [2024-07-25 17:14:30.547519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547530] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.194 [2024-07-25 17:14:30.547551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.194 [2024-07-25 17:14:30.547573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.194 [2024-07-25 17:14:30.547583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:38.194 [2024-07-25 17:14:30.547594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.194 [2024-07-25 17:14:30.547605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.194 [2024-07-25 17:14:30.547615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:38.194 [2024-07-25 17:14:30.547626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.194 [2024-07-25 17:14:30.547648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547659] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.194 [2024-07-25 17:14:30.547695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.194 [2024-07-25 17:14:30.547728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.194 [2024-07-25 17:14:30.547761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.194 [2024-07-25 17:14:30.547794] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.194 [2024-07-25 17:14:30.547827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.194 [2024-07-25 17:14:30.547849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.194 [2024-07-25 17:14:30.547860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:38.194 [2024-07-25 17:14:30.547871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.194 [2024-07-25 17:14:30.547882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.194 [2024-07-25 17:14:30.547894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:38.194 [2024-07-25 17:14:30.547905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.194 [2024-07-25 17:14:30.547941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:38.194 [2024-07-25 17:14:30.547951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.547962] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.194 [2024-07-25 17:14:30.547973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.194 [2024-07-25 17:14:30.547983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.194 [2024-07-25 17:14:30.547994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.194 [2024-07-25 17:14:30.548005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.194 [2024-07-25 17:14:30.548017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.194 [2024-07-25 17:14:30.548027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.194 
[2024-07-25 17:14:30.548038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.194 [2024-07-25 17:14:30.548049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.194 [2024-07-25 17:14:30.548076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.194 [2024-07-25 17:14:30.548104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.194 [2024-07-25 17:14:30.548119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.194 [2024-07-25 17:14:30.548131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:38.194 [2024-07-25 17:14:30.548143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:38.194 [2024-07-25 17:14:30.548154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:38.194 [2024-07-25 17:14:30.548165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:38.194 [2024-07-25 17:14:30.548176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:38.194 [2024-07-25 17:14:30.548186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:38.194 [2024-07-25 17:14:30.548197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:38.195 [2024-07-25 17:14:30.548208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:38.195 [2024-07-25 17:14:30.548218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:38.195 [2024-07-25 17:14:30.548229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:38.195 [2024-07-25 17:14:30.548282] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.195 [2024-07-25 17:14:30.548307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.195 [2024-07-25 17:14:30.548335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.195 [2024-07-25 17:14:30.548347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.195 [2024-07-25 17:14:30.548358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.195 [2024-07-25 17:14:30.548370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.548382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.195 [2024-07-25 17:14:30.548393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:23:38.195 [2024-07-25 17:14:30.548403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.602289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.602364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.195 [2024-07-25 17:14:30.602399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.820 ms 00:23:38.195 [2024-07-25 17:14:30.602411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.602525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.602542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.195 [2024-07-25 17:14:30.602554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:38.195 [2024-07-25 17:14:30.602565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.638708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.638772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.195 [2024-07-25 17:14:30.638804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.000 ms 00:23:38.195 [2024-07-25 17:14:30.638816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.638867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.638884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.195 [2024-07-25 17:14:30.638896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.195 [2024-07-25 17:14:30.638913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.639646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.639665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.195 [2024-07-25 17:14:30.639694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:23:38.195 [2024-07-25 17:14:30.639704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.639868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.639888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.195 [2024-07-25 17:14:30.639900] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:23:38.195 [2024-07-25 17:14:30.639911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.195 [2024-07-25 17:14:30.658683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.195 [2024-07-25 17:14:30.658725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.195 [2024-07-25 17:14:30.658742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.740 ms 00:23:38.195 [2024-07-25 17:14:30.658759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.675759] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:38.454 [2024-07-25 17:14:30.675822] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:38.454 [2024-07-25 17:14:30.675854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.675865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:38.454 [2024-07-25 17:14:30.675877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.946 ms 00:23:38.454 [2024-07-25 17:14:30.675888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.702189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.702263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:38.454 [2024-07-25 17:14:30.702302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.258 ms 00:23:38.454 [2024-07-25 17:14:30.702313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.715247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.715304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:38.454 [2024-07-25 17:14:30.715335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.890 ms 00:23:38.454 [2024-07-25 17:14:30.715345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.727793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.727849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:38.454 [2024-07-25 17:14:30.727879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.406 ms 00:23:38.454 [2024-07-25 17:14:30.727890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.728651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.728688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.454 [2024-07-25 17:14:30.728702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:23:38.454 [2024-07-25 17:14:30.728713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.794595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.794712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:38.454 [2024-07-25 17:14:30.794749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.858 ms 00:23:38.454 [2024-07-25 17:14:30.794762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.805042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:38.454 [2024-07-25 17:14:30.807300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.807368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.454 [2024-07-25 17:14:30.807399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:23:38.454 [2024-07-25 17:14:30.807410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.807511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.807531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:38.454 [2024-07-25 17:14:30.807544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:38.454 [2024-07-25 17:14:30.807555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.807681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.807704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.454 [2024-07-25 17:14:30.807716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:38.454 [2024-07-25 17:14:30.807728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.807776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.807790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.454 [2024-07-25 17:14:30.807802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:38.454 [2024-07-25 17:14:30.807813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.807851] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:38.454 [2024-07-25 17:14:30.807868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.807879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:38.454 [2024-07-25 17:14:30.807895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:38.454 [2024-07-25 17:14:30.807906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.833817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.833876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.454 [2024-07-25 17:14:30.833907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.887 ms 00:23:38.454 [2024-07-25 17:14:30.833919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.454 [2024-07-25 17:14:30.834013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.454 [2024-07-25 17:14:30.834037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:38.454 [2024-07-25 17:14:30.834049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:38.454 [2024-07-25 17:14:30.834060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
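Every management step in this bring-up is bracketed by the same trace_step lines from mngt/ftl_mngt.c: an Action (or Rollback) marker at source line 427, the step name at 428, its duration at 430 and its status at 431. On the raw console log, where each entry sits on its own line, a small awk filter can pull out a per-step duration table; this is a hypothetical helper for reading logs like this one, not part of the test itself (ftl0.log is a placeholder filename):

  awk '/428:trace_step/ { n = $0; sub(/.*name: /, "", n) }
       /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); printf "%12s  %s\n", d, n }' ftl0.log

For this startup the slow steps are the metadata and checkpoint restores (Initialize metadata 53.820 ms, Initialize NV cache 36.000 ms, Restore P2L checkpoints 65.858 ms); the total is reported in the finish message just below.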
00:23:38.454 [2024-07-25 17:14:30.835744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.920 ms, result 0 00:24:24.028  Copying: 21/1024 [MB] (21 MBps) Copying: 43/1024 [MB] (21 MBps) Copying: 66/1024 [MB] (22 MBps) Copying: 89/1024 [MB] (22 MBps) Copying: 111/1024 [MB] (22 MBps) Copying: 133/1024 [MB] (22 MBps) Copying: 155/1024 [MB] (21 MBps) Copying: 177/1024 [MB] (21 MBps) Copying: 200/1024 [MB] (22 MBps) Copying: 222/1024 [MB] (22 MBps) Copying: 245/1024 [MB] (22 MBps) Copying: 268/1024 [MB] (23 MBps) Copying: 291/1024 [MB] (22 MBps) Copying: 314/1024 [MB] (23 MBps) Copying: 337/1024 [MB] (23 MBps) Copying: 360/1024 [MB] (22 MBps) Copying: 382/1024 [MB] (22 MBps) Copying: 405/1024 [MB] (22 MBps) Copying: 427/1024 [MB] (22 MBps) Copying: 450/1024 [MB] (22 MBps) Copying: 472/1024 [MB] (22 MBps) Copying: 496/1024 [MB] (23 MBps) Copying: 519/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (23 MBps) Copying: 566/1024 [MB] (23 MBps) Copying: 589/1024 [MB] (23 MBps) Copying: 612/1024 [MB] (23 MBps) Copying: 635/1024 [MB] (22 MBps) Copying: 658/1024 [MB] (23 MBps) Copying: 681/1024 [MB] (22 MBps) Copying: 704/1024 [MB] (22 MBps) Copying: 727/1024 [MB] (22 MBps) Copying: 749/1024 [MB] (21 MBps) Copying: 771/1024 [MB] (22 MBps) Copying: 792/1024 [MB] (21 MBps) Copying: 814/1024 [MB] (21 MBps) Copying: 836/1024 [MB] (21 MBps) Copying: 857/1024 [MB] (21 MBps) Copying: 880/1024 [MB] (22 MBps) Copying: 902/1024 [MB] (22 MBps) Copying: 923/1024 [MB] (21 MBps) Copying: 946/1024 [MB] (22 MBps) Copying: 968/1024 [MB] (22 MBps) Copying: 991/1024 [MB] (22 MBps) Copying: 1014/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-25 17:15:16.298605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.298715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:24.028 [2024-07-25 17:15:16.298755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:24.028 [2024-07-25 17:15:16.298768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.298799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.028 [2024-07-25 17:15:16.302774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.302808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.028 [2024-07-25 17:15:16.302823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:24:24.028 [2024-07-25 17:15:16.302835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.305156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.305218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.028 [2024-07-25 17:15:16.305233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:24:24.028 [2024-07-25 17:15:16.305245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.320908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.321005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.028 [2024-07-25 17:15:16.321022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.642 ms 00:24:24.028 
[2024-07-25 17:15:16.321034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.326522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.326583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.028 [2024-07-25 17:15:16.326612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.451 ms 00:24:24.028 [2024-07-25 17:15:16.326623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.354498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.354568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:24.028 [2024-07-25 17:15:16.354601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.801 ms 00:24:24.028 [2024-07-25 17:15:16.354612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.373700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.373760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:24.028 [2024-07-25 17:15:16.373799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:24:24.028 [2024-07-25 17:15:16.373811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.373948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.373968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.028 [2024-07-25 17:15:16.374026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:24.028 [2024-07-25 17:15:16.374046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.402350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.402406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:24.028 [2024-07-25 17:15:16.402437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.282 ms 00:24:24.028 [2024-07-25 17:15:16.402447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.429537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.429593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:24.028 [2024-07-25 17:15:16.429624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.048 ms 00:24:24.028 [2024-07-25 17:15:16.429635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.028 [2024-07-25 17:15:16.455834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.028 [2024-07-25 17:15:16.455904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:24.029 [2024-07-25 17:15:16.455949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.160 ms 00:24:24.029 [2024-07-25 17:15:16.455974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.029 [2024-07-25 17:15:16.481918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.029 [2024-07-25 17:15:16.482011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:24.029 [2024-07-25 17:15:16.482027] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:24:24.029 [2024-07-25 17:15:16.482038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.029 [2024-07-25 17:15:16.482079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:24.029 [2024-07-25 17:15:16.482100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:24:24.029 [2024-07-25 17:15:16.482397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.482989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:24.029 [2024-07-25 17:15:16.483168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483376] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:24.030 [2024-07-25 17:15:16.483418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:24.030 [2024-07-25 17:15:16.483429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:24:24.030 [2024-07-25 17:15:16.483441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:24.030 [2024-07-25 17:15:16.483458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:24.030 [2024-07-25 17:15:16.483468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:24.030 [2024-07-25 17:15:16.483479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:24.030 [2024-07-25 17:15:16.483489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:24.030 [2024-07-25 17:15:16.483501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:24.030 [2024-07-25 17:15:16.483511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:24.030 [2024-07-25 17:15:16.483531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:24.030 [2024-07-25 17:15:16.483541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:24.030 [2024-07-25 17:15:16.483559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.030 [2024-07-25 17:15:16.483570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:24.030 [2024-07-25 17:15:16.483582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:24:24.030 [2024-07-25 17:15:16.483597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.498564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.288 [2024-07-25 17:15:16.498616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.288 [2024-07-25 17:15:16.498670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.928 ms 00:24:24.288 [2024-07-25 17:15:16.498696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.499272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.288 [2024-07-25 17:15:16.499327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.288 [2024-07-25 17:15:16.499358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:24:24.288 [2024-07-25 17:15:16.499369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.533049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.533108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.288 [2024-07-25 17:15:16.533139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.533151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.533206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:24.288 [2024-07-25 17:15:16.533221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.288 [2024-07-25 17:15:16.533233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.533243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.533317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.533352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.288 [2024-07-25 17:15:16.533381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.533392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.533429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.533442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.288 [2024-07-25 17:15:16.533453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.533464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.621399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.621476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.288 [2024-07-25 17:15:16.621509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.621520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.701972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.288 [2024-07-25 17:15:16.702103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.702223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.288 [2024-07-25 17:15:16.702262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.702352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.288 [2024-07-25 17:15:16.702381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.702527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.288 [2024-07-25 17:15:16.702565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 
17:15:16.702627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.288 [2024-07-25 17:15:16.702694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.702766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.288 [2024-07-25 17:15:16.702801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.288 [2024-07-25 17:15:16.702813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.288 [2024-07-25 17:15:16.702866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.288 [2024-07-25 17:15:16.702883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.288 [2024-07-25 17:15:16.702895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.289 [2024-07-25 17:15:16.702908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.289 [2024-07-25 17:15:16.703201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.552 ms, result 0 00:24:25.663 00:24:25.663 00:24:25.663 17:15:17 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:25.663 [2024-07-25 17:15:17.851263] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
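With the write-side process shut down cleanly (FTL shutdown, duration = 404.552 ms, result 0), restore.sh@74 starts a second spdk_dd, this time with --ib=ftl0 --of=.../testfile --count=262144: it reads 262144 blocks — the same 1 GiB written before the shutdown, assuming the 4 KiB block size used by the earlier dd — back out of the re-opened FTL device into the test file. A minimal sketch of this verify half, again with shortened paths and with the final comparison only assumed, since it is not shown in this excerpt:

  spdk_dd --ib=ftl0 --of="$testfile" --json=$SPDK_DIR/test/ftl/config/ftl.json --count=262144   # read 1 GiB back from ftl0
  md5sum "$testfile"        # assumed: compared with the pre-shutdown checksum to confirm the data survived the restore

The FTL trace that follows is the device starting up again, loading the superblock and metadata persisted during the shutdown above.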
00:24:25.663 [2024-07-25 17:15:17.851433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81143 ] 00:24:25.663 [2024-07-25 17:15:18.010252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.922 [2024-07-25 17:15:18.221638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.180 [2024-07-25 17:15:18.545961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.180 [2024-07-25 17:15:18.546087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.440 [2024-07-25 17:15:18.708556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.708622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.440 [2024-07-25 17:15:18.708658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.440 [2024-07-25 17:15:18.708670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.708732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.708750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.440 [2024-07-25 17:15:18.708762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:26.440 [2024-07-25 17:15:18.708777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.708810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.440 [2024-07-25 17:15:18.709817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.440 [2024-07-25 17:15:18.709857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.709893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.440 [2024-07-25 17:15:18.709905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:24:26.440 [2024-07-25 17:15:18.709916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.712110] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.440 [2024-07-25 17:15:18.729173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.729231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.440 [2024-07-25 17:15:18.729266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.066 ms 00:24:26.440 [2024-07-25 17:15:18.729278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.729366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.729418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.440 [2024-07-25 17:15:18.729430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:26.440 [2024-07-25 17:15:18.729440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.739639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:26.440 [2024-07-25 17:15:18.739694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.440 [2024-07-25 17:15:18.739725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.074 ms 00:24:26.440 [2024-07-25 17:15:18.739736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.739830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.739848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.440 [2024-07-25 17:15:18.739860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:26.440 [2024-07-25 17:15:18.739870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.739988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.740023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.440 [2024-07-25 17:15:18.740050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:26.440 [2024-07-25 17:15:18.740065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.740103] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:26.440 [2024-07-25 17:15:18.745195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.745265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.440 [2024-07-25 17:15:18.745298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:24:26.440 [2024-07-25 17:15:18.745308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.745359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.440 [2024-07-25 17:15:18.745390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.440 [2024-07-25 17:15:18.745401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:26.440 [2024-07-25 17:15:18.745412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.440 [2024-07-25 17:15:18.745455] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.441 [2024-07-25 17:15:18.745486] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:26.441 [2024-07-25 17:15:18.745559] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.441 [2024-07-25 17:15:18.745582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:26.441 [2024-07-25 17:15:18.745680] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:26.441 [2024-07-25 17:15:18.745695] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.441 [2024-07-25 17:15:18.745709] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:26.441 [2024-07-25 17:15:18.745723] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.441 [2024-07-25 17:15:18.745736] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.441 [2024-07-25 17:15:18.745749] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:26.441 [2024-07-25 17:15:18.745759] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.441 [2024-07-25 17:15:18.745770] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:26.441 [2024-07-25 17:15:18.745781] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:26.441 [2024-07-25 17:15:18.745803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.441 [2024-07-25 17:15:18.745828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.441 [2024-07-25 17:15:18.745839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:26.441 [2024-07-25 17:15:18.745850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.441 [2024-07-25 17:15:18.745957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.441 [2024-07-25 17:15:18.745993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.441 [2024-07-25 17:15:18.746006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:26.441 [2024-07-25 17:15:18.746031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.441 [2024-07-25 17:15:18.746148] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.441 [2024-07-25 17:15:18.746167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.441 [2024-07-25 17:15:18.746191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.441 [2024-07-25 17:15:18.746224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.441 [2024-07-25 17:15:18.746256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.441 [2024-07-25 17:15:18.746277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.441 [2024-07-25 17:15:18.746287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:26.441 [2024-07-25 17:15:18.746297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.441 [2024-07-25 17:15:18.746307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.441 [2024-07-25 17:15:18.746318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:26.441 [2024-07-25 17:15:18.746332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.441 [2024-07-25 17:15:18.746367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746377] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.441 [2024-07-25 17:15:18.746410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.441 [2024-07-25 17:15:18.746449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.441 [2024-07-25 17:15:18.746479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.441 [2024-07-25 17:15:18.746509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.441 [2024-07-25 17:15:18.746537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.441 [2024-07-25 17:15:18.746557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.441 [2024-07-25 17:15:18.746566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:26.441 [2024-07-25 17:15:18.746576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.441 [2024-07-25 17:15:18.746586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:26.441 [2024-07-25 17:15:18.746596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:26.441 [2024-07-25 17:15:18.746606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:26.441 [2024-07-25 17:15:18.746626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:26.441 [2024-07-25 17:15:18.746660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746670] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.441 [2024-07-25 17:15:18.746690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.441 [2024-07-25 17:15:18.746701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.441 [2024-07-25 17:15:18.746729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:26.441 [2024-07-25 17:15:18.746740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.441 [2024-07-25 17:15:18.746751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.441 
[2024-07-25 17:15:18.746761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.441 [2024-07-25 17:15:18.746771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.441 [2024-07-25 17:15:18.746781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.441 [2024-07-25 17:15:18.746793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.441 [2024-07-25 17:15:18.746806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.746818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:26.441 [2024-07-25 17:15:18.746828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:26.441 [2024-07-25 17:15:18.746839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:26.441 [2024-07-25 17:15:18.746849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:26.441 [2024-07-25 17:15:18.746859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:26.441 [2024-07-25 17:15:18.746870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:26.441 [2024-07-25 17:15:18.746880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:26.441 [2024-07-25 17:15:18.746890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:26.441 [2024-07-25 17:15:18.746900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:26.441 [2024-07-25 17:15:18.746911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.746921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.746931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.746941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.746952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:26.441 [2024-07-25 17:15:18.746962] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.441 [2024-07-25 17:15:18.747007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.747026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.441 [2024-07-25 17:15:18.747052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.441 [2024-07-25 17:15:18.747063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.442 [2024-07-25 17:15:18.747073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:26.442 [2024-07-25 17:15:18.747084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.747111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.442 [2024-07-25 17:15:18.747132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:24:26.442 [2024-07-25 17:15:18.747143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.793421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.793491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.442 [2024-07-25 17:15:18.793538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.211 ms 00:24:26.442 [2024-07-25 17:15:18.793549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.793664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.793681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.442 [2024-07-25 17:15:18.793693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:26.442 [2024-07-25 17:15:18.793703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.832892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.832964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.442 [2024-07-25 17:15:18.833003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.060 ms 00:24:26.442 [2024-07-25 17:15:18.833022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.833069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.833086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.442 [2024-07-25 17:15:18.833098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.442 [2024-07-25 17:15:18.833114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.833806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.833840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.442 [2024-07-25 17:15:18.833855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:24:26.442 [2024-07-25 17:15:18.833865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.834077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.834099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.442 [2024-07-25 17:15:18.834111] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:24:26.442 [2024-07-25 17:15:18.834123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.851291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.851345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.442 [2024-07-25 17:15:18.851361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.120 ms 00:24:26.442 [2024-07-25 17:15:18.851377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.866572] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:26.442 [2024-07-25 17:15:18.866628] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.442 [2024-07-25 17:15:18.866710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.866721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.442 [2024-07-25 17:15:18.866734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.214 ms 00:24:26.442 [2024-07-25 17:15:18.866744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.892273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.892336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.442 [2024-07-25 17:15:18.892367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.485 ms 00:24:26.442 [2024-07-25 17:15:18.892377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.442 [2024-07-25 17:15:18.906023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.442 [2024-07-25 17:15:18.906077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.442 [2024-07-25 17:15:18.906107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.601 ms 00:24:26.442 [2024-07-25 17:15:18.906117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:18.919783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:18.919838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.701 [2024-07-25 17:15:18.919869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:24:26.701 [2024-07-25 17:15:18.919880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:18.920713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:18.920748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.701 [2024-07-25 17:15:18.920779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:24:26.701 [2024-07-25 17:15:18.920804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.004276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.004365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.701 [2024-07-25 17:15:19.004414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.442 ms 00:24:26.701 [2024-07-25 17:15:19.004433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.015248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:26.701 [2024-07-25 17:15:19.017637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.017685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.701 [2024-07-25 17:15:19.017716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.133 ms 00:24:26.701 [2024-07-25 17:15:19.017727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.017853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.017873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.701 [2024-07-25 17:15:19.017885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.701 [2024-07-25 17:15:19.017895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.018051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.018071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.701 [2024-07-25 17:15:19.018083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:26.701 [2024-07-25 17:15:19.018095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.018129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.018146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.701 [2024-07-25 17:15:19.018173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.701 [2024-07-25 17:15:19.018194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.701 [2024-07-25 17:15:19.018234] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.701 [2024-07-25 17:15:19.018250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.701 [2024-07-25 17:15:19.018266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.701 [2024-07-25 17:15:19.018277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:26.701 [2024-07-25 17:15:19.018288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.702 [2024-07-25 17:15:19.045402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.702 [2024-07-25 17:15:19.045460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.702 [2024-07-25 17:15:19.045493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.086 ms 00:24:26.702 [2024-07-25 17:15:19.045511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.702 [2024-07-25 17:15:19.045602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.702 [2024-07-25 17:15:19.045620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.702 [2024-07-25 17:15:19.045642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:26.702 [2024-07-25 17:15:19.045653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:26.702 [2024-07-25 17:15:19.047274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.070 ms, result 0 00:25:11.939  Copying: 22/1024 [MB] (22 MBps) Copying: 44/1024 [MB] (21 MBps) Copying: 66/1024 [MB] (22 MBps) Copying: 89/1024 [MB] (22 MBps) Copying: 112/1024 [MB] (22 MBps) Copying: 134/1024 [MB] (22 MBps) Copying: 157/1024 [MB] (22 MBps) Copying: 178/1024 [MB] (21 MBps) Copying: 200/1024 [MB] (21 MBps) Copying: 222/1024 [MB] (22 MBps) Copying: 244/1024 [MB] (22 MBps) Copying: 267/1024 [MB] (22 MBps) Copying: 290/1024 [MB] (22 MBps) Copying: 313/1024 [MB] (23 MBps) Copying: 337/1024 [MB] (23 MBps) Copying: 361/1024 [MB] (24 MBps) Copying: 384/1024 [MB] (23 MBps) Copying: 408/1024 [MB] (24 MBps) Copying: 433/1024 [MB] (24 MBps) Copying: 458/1024 [MB] (25 MBps) Copying: 483/1024 [MB] (25 MBps) Copying: 509/1024 [MB] (26 MBps) Copying: 534/1024 [MB] (24 MBps) Copying: 558/1024 [MB] (23 MBps) Copying: 582/1024 [MB] (24 MBps) Copying: 605/1024 [MB] (22 MBps) Copying: 628/1024 [MB] (22 MBps) Copying: 651/1024 [MB] (23 MBps) Copying: 674/1024 [MB] (22 MBps) Copying: 697/1024 [MB] (23 MBps) Copying: 720/1024 [MB] (22 MBps) Copying: 742/1024 [MB] (22 MBps) Copying: 766/1024 [MB] (23 MBps) Copying: 788/1024 [MB] (22 MBps) Copying: 811/1024 [MB] (22 MBps) Copying: 834/1024 [MB] (23 MBps) Copying: 858/1024 [MB] (23 MBps) Copying: 882/1024 [MB] (23 MBps) Copying: 906/1024 [MB] (23 MBps) Copying: 930/1024 [MB] (24 MBps) Copying: 952/1024 [MB] (21 MBps) Copying: 974/1024 [MB] (22 MBps) Copying: 997/1024 [MB] (22 MBps) Copying: 1020/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-25 17:16:04.183775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.183942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:11.939 [2024-07-25 17:16:04.184020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:11.939 [2024-07-25 17:16:04.184050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.184119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:11.939 [2024-07-25 17:16:04.189095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.189139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:11.939 [2024-07-25 17:16:04.189156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 00:25:11.939 [2024-07-25 17:16:04.189174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.189510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.189534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:11.939 [2024-07-25 17:16:04.189547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:25:11.939 [2024-07-25 17:16:04.189558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.192968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.193039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:11.939 [2024-07-25 17:16:04.193072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms 00:25:11.939 [2024-07-25 17:16:04.193083] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.199657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.199715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:11.939 [2024-07-25 17:16:04.199745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.544 ms 00:25:11.939 [2024-07-25 17:16:04.199755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.228580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.228642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:11.939 [2024-07-25 17:16:04.228676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.733 ms 00:25:11.939 [2024-07-25 17:16:04.228687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.245160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.245220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:11.939 [2024-07-25 17:16:04.245252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.428 ms 00:25:11.939 [2024-07-25 17:16:04.245264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.245482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.245507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:11.939 [2024-07-25 17:16:04.245526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:11.939 [2024-07-25 17:16:04.245537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.273460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.273520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:11.939 [2024-07-25 17:16:04.273552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.889 ms 00:25:11.939 [2024-07-25 17:16:04.273562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.300575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.300647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:11.939 [2024-07-25 17:16:04.300679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.971 ms 00:25:11.939 [2024-07-25 17:16:04.300690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.327693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.327752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:11.939 [2024-07-25 17:16:04.327797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.960 ms 00:25:11.939 [2024-07-25 17:16:04.327808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.354737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.939 [2024-07-25 17:16:04.354798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:11.939 [2024-07-25 17:16:04.354829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 26.842 ms 00:25:11.939 [2024-07-25 17:16:04.354840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.939 [2024-07-25 17:16:04.354882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:11.939 [2024-07-25 17:16:04.354912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:11.939 [2024-07-25 17:16:04.354927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.354938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.354949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.354960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 
[2024-07-25 17:16:04.355230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:25:11.940 [2024-07-25 17:16:04.355547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.355993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.356004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.356015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.356026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.356036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:11.940 [2024-07-25 17:16:04.356047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:11.941 [2024-07-25 17:16:04.356188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:11.941 [2024-07-25 17:16:04.356200] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:25:11.941 [2024-07-25 17:16:04.356218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:11.941 [2024-07-25 17:16:04.356229] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:11.941 [2024-07-25 17:16:04.356240] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:11.941 [2024-07-25 17:16:04.356251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:11.941 [2024-07-25 17:16:04.356261] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:11.941 [2024-07-25 17:16:04.356272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:11.941 [2024-07-25 17:16:04.356283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:11.941 [2024-07-25 17:16:04.356292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:11.941 [2024-07-25 17:16:04.356302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:11.941 [2024-07-25 17:16:04.356312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.941 [2024-07-25 17:16:04.356323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.941 [2024-07-25 17:16:04.356339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:25:11.941 [2024-07-25 17:16:04.356350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.941 [2024-07-25 17:16:04.371920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.941 [2024-07-25 17:16:04.372007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.941 [2024-07-25 17:16:04.372054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.528 ms 00:25:11.941 [2024-07-25 17:16:04.372066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.941 [2024-07-25 17:16:04.372614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.941 [2024-07-25 17:16:04.372645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.941 [2024-07-25 17:16:04.372660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:25:11.941 [2024-07-25 17:16:04.372687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.408020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.408091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.199 [2024-07-25 17:16:04.408123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.408134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.408211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 
17:16:04.408232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.199 [2024-07-25 17:16:04.408244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.408254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.408389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.408424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.199 [2024-07-25 17:16:04.408436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.408447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.408470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.408485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.199 [2024-07-25 17:16:04.408496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.408507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.501361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.501462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.199 [2024-07-25 17:16:04.501498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.501510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.583749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.583842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.199 [2024-07-25 17:16:04.583877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.583890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.583976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.199 [2024-07-25 17:16:04.584048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.199 [2024-07-25 17:16:04.584185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.199 [2024-07-25 17:16:04.584365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584428] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.199 [2024-07-25 17:16:04.584459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.199 [2024-07-25 17:16:04.584566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.199 [2024-07-25 17:16:04.584649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.199 [2024-07-25 17:16:04.584662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.199 [2024-07-25 17:16:04.584673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.199 [2024-07-25 17:16:04.584893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.091 ms, result 0 00:25:13.135 00:25:13.135 00:25:13.135 17:16:05 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:15.667 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:15.667 17:16:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:15.667 [2024-07-25 17:16:07.622799] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:15.667 [2024-07-25 17:16:07.623017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81636 ] 00:25:15.667 [2024-07-25 17:16:07.786741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.667 [2024-07-25 17:16:08.030116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.926 [2024-07-25 17:16:08.353684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:15.926 [2024-07-25 17:16:08.353799] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.185 [2024-07-25 17:16:08.516058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.516133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:16.185 [2024-07-25 17:16:08.516170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:16.185 [2024-07-25 17:16:08.516182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.516248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.516268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.185 [2024-07-25 17:16:08.516280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:16.185 [2024-07-25 17:16:08.516295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.516331] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:16.185 [2024-07-25 17:16:08.517310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:16.185 [2024-07-25 17:16:08.517372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.517387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.185 [2024-07-25 17:16:08.517400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:25:16.185 [2024-07-25 17:16:08.517411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.519487] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:16.185 [2024-07-25 17:16:08.535703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.535765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:16.185 [2024-07-25 17:16:08.535783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.218 ms 00:25:16.185 [2024-07-25 17:16:08.535795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.535896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.535922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:16.185 [2024-07-25 17:16:08.535944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:16.185 [2024-07-25 17:16:08.535955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.545528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:16.185 [2024-07-25 17:16:08.545591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.185 [2024-07-25 17:16:08.545622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.459 ms 00:25:16.185 [2024-07-25 17:16:08.545633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.545747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.545768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.185 [2024-07-25 17:16:08.545780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:16.185 [2024-07-25 17:16:08.545790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.185 [2024-07-25 17:16:08.545887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.185 [2024-07-25 17:16:08.545905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.186 [2024-07-25 17:16:08.545918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:16.186 [2024-07-25 17:16:08.545929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.186 [2024-07-25 17:16:08.545967] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:16.186 [2024-07-25 17:16:08.551251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.186 [2024-07-25 17:16:08.551308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.186 [2024-07-25 17:16:08.551339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.294 ms 00:25:16.186 [2024-07-25 17:16:08.551349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.186 [2024-07-25 17:16:08.551424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.186 [2024-07-25 17:16:08.551443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.186 [2024-07-25 17:16:08.551455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:16.186 [2024-07-25 17:16:08.551465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.186 [2024-07-25 17:16:08.551511] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:16.186 [2024-07-25 17:16:08.551586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:16.186 [2024-07-25 17:16:08.551632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:16.186 [2024-07-25 17:16:08.551665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:16.186 [2024-07-25 17:16:08.551792] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:16.186 [2024-07-25 17:16:08.551820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.186 [2024-07-25 17:16:08.551837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:16.186 [2024-07-25 17:16:08.551852] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.186 [2024-07-25 17:16:08.551866] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.186 [2024-07-25 17:16:08.551879] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:16.186 [2024-07-25 17:16:08.551890] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.186 [2024-07-25 17:16:08.551901] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:16.186 [2024-07-25 17:16:08.551912] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:16.186 [2024-07-25 17:16:08.551925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.186 [2024-07-25 17:16:08.551943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.186 [2024-07-25 17:16:08.551956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:25:16.186 [2024-07-25 17:16:08.551967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.186 [2024-07-25 17:16:08.552088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.186 [2024-07-25 17:16:08.552106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.186 [2024-07-25 17:16:08.552118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:16.186 [2024-07-25 17:16:08.552129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.186 [2024-07-25 17:16:08.552238] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.186 [2024-07-25 17:16:08.552256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.186 [2024-07-25 17:16:08.552274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.186 [2024-07-25 17:16:08.552308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.186 [2024-07-25 17:16:08.552340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.186 [2024-07-25 17:16:08.552361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.186 [2024-07-25 17:16:08.552371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:16.186 [2024-07-25 17:16:08.552381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.186 [2024-07-25 17:16:08.552391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.186 [2024-07-25 17:16:08.552401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:16.186 [2024-07-25 17:16:08.552411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.186 [2024-07-25 17:16:08.552435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552445] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.186 [2024-07-25 17:16:08.552480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.186 [2024-07-25 17:16:08.552512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.186 [2024-07-25 17:16:08.552556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.186 [2024-07-25 17:16:08.552588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.186 [2024-07-25 17:16:08.552619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.186 [2024-07-25 17:16:08.552640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.186 [2024-07-25 17:16:08.552650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:16.186 [2024-07-25 17:16:08.552660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.186 [2024-07-25 17:16:08.552671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:16.186 [2024-07-25 17:16:08.552681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:16.186 [2024-07-25 17:16:08.552691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:16.186 [2024-07-25 17:16:08.552712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:16.186 [2024-07-25 17:16:08.552722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552732] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.186 [2024-07-25 17:16:08.552743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.186 [2024-07-25 17:16:08.552754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.186 [2024-07-25 17:16:08.552776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.186 [2024-07-25 17:16:08.552787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.186 [2024-07-25 17:16:08.552799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.186 
[2024-07-25 17:16:08.552811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.186 [2024-07-25 17:16:08.552821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.186 [2024-07-25 17:16:08.552832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.186 [2024-07-25 17:16:08.552844] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.186 [2024-07-25 17:16:08.552858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.186 [2024-07-25 17:16:08.552871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:16.186 [2024-07-25 17:16:08.552891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:16.186 [2024-07-25 17:16:08.552903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:16.186 [2024-07-25 17:16:08.552914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:16.186 [2024-07-25 17:16:08.552935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:16.186 [2024-07-25 17:16:08.552946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:16.186 [2024-07-25 17:16:08.552957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:16.186 [2024-07-25 17:16:08.552968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:16.186 [2024-07-25 17:16:08.552995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:16.186 [2024-07-25 17:16:08.553008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:16.186 [2024-07-25 17:16:08.553020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:16.187 [2024-07-25 17:16:08.553031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:16.187 [2024-07-25 17:16:08.553044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:16.187 [2024-07-25 17:16:08.553056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:16.187 [2024-07-25 17:16:08.553067] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.187 [2024-07-25 17:16:08.553081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.187 [2024-07-25 17:16:08.553099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.187 [2024-07-25 17:16:08.553111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.187 [2024-07-25 17:16:08.553123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.187 [2024-07-25 17:16:08.553135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:16.187 [2024-07-25 17:16:08.553147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.553159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.187 [2024-07-25 17:16:08.553171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:25:16.187 [2024-07-25 17:16:08.553182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.600825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.600907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.187 [2024-07-25 17:16:08.600976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.577 ms 00:25:16.187 [2024-07-25 17:16:08.600989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.601126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.601146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:16.187 [2024-07-25 17:16:08.601159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:16.187 [2024-07-25 17:16:08.601186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.643131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.643199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.187 [2024-07-25 17:16:08.643233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.843 ms 00:25:16.187 [2024-07-25 17:16:08.643244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.643308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.643325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.187 [2024-07-25 17:16:08.643338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.187 [2024-07-25 17:16:08.643354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.644128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.644184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.187 [2024-07-25 17:16:08.644198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:25:16.187 [2024-07-25 17:16:08.644210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.187 [2024-07-25 17:16:08.644417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.187 [2024-07-25 17:16:08.644477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.187 [2024-07-25 17:16:08.644491] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:25:16.187 [2024-07-25 17:16:08.644503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.446 [2024-07-25 17:16:08.662475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.446 [2024-07-25 17:16:08.662534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.446 [2024-07-25 17:16:08.662566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.924 ms 00:25:16.446 [2024-07-25 17:16:08.662582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.446 [2024-07-25 17:16:08.678197] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:16.446 [2024-07-25 17:16:08.678242] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:16.446 [2024-07-25 17:16:08.678276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.446 [2024-07-25 17:16:08.678287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:16.446 [2024-07-25 17:16:08.678300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.519 ms 00:25:16.446 [2024-07-25 17:16:08.678310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.446 [2024-07-25 17:16:08.704917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.446 [2024-07-25 17:16:08.705003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:16.446 [2024-07-25 17:16:08.705021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.562 ms 00:25:16.446 [2024-07-25 17:16:08.705032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.446 [2024-07-25 17:16:08.719474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.446 [2024-07-25 17:16:08.719533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:16.446 [2024-07-25 17:16:08.719576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.389 ms 00:25:16.447 [2024-07-25 17:16:08.719586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.733400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.733457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:16.447 [2024-07-25 17:16:08.733488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.772 ms 00:25:16.447 [2024-07-25 17:16:08.733498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.734390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.734438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:16.447 [2024-07-25 17:16:08.734468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:25:16.447 [2024-07-25 17:16:08.734479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.810458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.810563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:16.447 [2024-07-25 17:16:08.810583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.951 ms 00:25:16.447 [2024-07-25 17:16:08.810602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.821477] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:16.447 [2024-07-25 17:16:08.824146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.824199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:16.447 [2024-07-25 17:16:08.824230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.452 ms 00:25:16.447 [2024-07-25 17:16:08.824241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.824341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.824360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:16.447 [2024-07-25 17:16:08.824373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:16.447 [2024-07-25 17:16:08.824387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.824528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.824552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:16.447 [2024-07-25 17:16:08.824565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:16.447 [2024-07-25 17:16:08.824576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.824623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.824639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:16.447 [2024-07-25 17:16:08.824651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:16.447 [2024-07-25 17:16:08.824661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.824702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:16.447 [2024-07-25 17:16:08.824722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.824734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:16.447 [2024-07-25 17:16:08.824746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:16.447 [2024-07-25 17:16:08.824756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.852039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.852100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:16.447 [2024-07-25 17:16:08.852138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.257 ms 00:25:16.447 [2024-07-25 17:16:08.852153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.447 [2024-07-25 17:16:08.852228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.447 [2024-07-25 17:16:08.852247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:16.447 [2024-07-25 17:16:08.852259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:16.447 [2024-07-25 17:16:08.852269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:16.447 [2024-07-25 17:16:08.853940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.287 ms, result 0 00:26:02.918  Copying: 21/1024 [MB] (21 MBps) Copying: 43/1024 [MB] (21 MBps) Copying: 65/1024 [MB] (21 MBps) Copying: 86/1024 [MB] (21 MBps) Copying: 107/1024 [MB] (20 MBps) Copying: 129/1024 [MB] (22 MBps) Copying: 152/1024 [MB] (22 MBps) Copying: 174/1024 [MB] (21 MBps) Copying: 195/1024 [MB] (21 MBps) Copying: 217/1024 [MB] (22 MBps) Copying: 239/1024 [MB] (22 MBps) Copying: 262/1024 [MB] (22 MBps) Copying: 284/1024 [MB] (22 MBps) Copying: 307/1024 [MB] (22 MBps) Copying: 329/1024 [MB] (21 MBps) Copying: 351/1024 [MB] (22 MBps) Copying: 374/1024 [MB] (22 MBps) Copying: 394/1024 [MB] (20 MBps) Copying: 415/1024 [MB] (21 MBps) Copying: 437/1024 [MB] (21 MBps) Copying: 460/1024 [MB] (23 MBps) Copying: 483/1024 [MB] (22 MBps) Copying: 505/1024 [MB] (22 MBps) Copying: 528/1024 [MB] (22 MBps) Copying: 551/1024 [MB] (22 MBps) Copying: 574/1024 [MB] (22 MBps) Copying: 597/1024 [MB] (22 MBps) Copying: 619/1024 [MB] (22 MBps) Copying: 641/1024 [MB] (22 MBps) Copying: 664/1024 [MB] (22 MBps) Copying: 687/1024 [MB] (22 MBps) Copying: 710/1024 [MB] (23 MBps) Copying: 733/1024 [MB] (23 MBps) Copying: 757/1024 [MB] (23 MBps) Copying: 780/1024 [MB] (23 MBps) Copying: 803/1024 [MB] (22 MBps) Copying: 827/1024 [MB] (23 MBps) Copying: 850/1024 [MB] (23 MBps) Copying: 874/1024 [MB] (23 MBps) Copying: 897/1024 [MB] (23 MBps) Copying: 920/1024 [MB] (22 MBps) Copying: 943/1024 [MB] (22 MBps) Copying: 966/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (23 MBps) Copying: 1012/1024 [MB] (23 MBps) Copying: 1023/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-25 17:16:55.341516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.918 [2024-07-25 17:16:55.341613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.918 [2024-07-25 17:16:55.341666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.918 [2024-07-25 17:16:55.341693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.918 [2024-07-25 17:16:55.343822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.918 [2024-07-25 17:16:55.349612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.918 [2024-07-25 17:16:55.349671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.918 [2024-07-25 17:16:55.349686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.709 ms 00:26:02.918 [2024-07-25 17:16:55.349696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.918 [2024-07-25 17:16:55.363530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.918 [2024-07-25 17:16:55.363602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.918 [2024-07-25 17:16:55.363618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.567 ms 00:26:02.918 [2024-07-25 17:16:55.363640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.918 [2024-07-25 17:16:55.384876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.918 [2024-07-25 17:16:55.384925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.918 [2024-07-25 17:16:55.384940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.215 ms 00:26:02.918 [2024-07-25 17:16:55.384950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.390451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.390497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:03.176 [2024-07-25 17:16:55.390510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.458 ms 00:26:03.176 [2024-07-25 17:16:55.390520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.418192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.418243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:03.176 [2024-07-25 17:16:55.418257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.593 ms 00:26:03.176 [2024-07-25 17:16:55.418266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.434045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.434102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:03.176 [2024-07-25 17:16:55.434127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.742 ms 00:26:03.176 [2024-07-25 17:16:55.434137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.544640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.544704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:03.176 [2024-07-25 17:16:55.544738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.460 ms 00:26:03.176 [2024-07-25 17:16:55.544750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.573549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.573600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:03.176 [2024-07-25 17:16:55.573615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.779 ms 00:26:03.176 [2024-07-25 17:16:55.573625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.600409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.600462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:03.176 [2024-07-25 17:16:55.600476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.746 ms 00:26:03.176 [2024-07-25 17:16:55.600487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.176 [2024-07-25 17:16:55.627044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.176 [2024-07-25 17:16:55.627094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:03.176 [2024-07-25 17:16:55.627121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.503 ms 00:26:03.176 [2024-07-25 17:16:55.627131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.436 [2024-07-25 17:16:55.653306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.436 [2024-07-25 17:16:55.653357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:03.436 [2024-07-25 
17:16:55.653371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.098 ms 00:26:03.436 [2024-07-25 17:16:55.653381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.436 [2024-07-25 17:16:55.653419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:03.436 [2024-07-25 17:16:55.653440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112640 / 261120 wr_cnt: 1 state: open 00:26:03.436 [2024-07-25 17:16:55.653454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:03.436 [2024-07-25 17:16:55.653732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.653983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654123] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 
17:16:55.654461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:26:03.437 [2024-07-25 17:16:55.654853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:03.437 [2024-07-25 17:16:55.654895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:03.437 [2024-07-25 17:16:55.654906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:26:03.437 [2024-07-25 17:16:55.654926] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112640 00:26:03.437 [2024-07-25 17:16:55.654936] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113600 00:26:03.437 [2024-07-25 17:16:55.654951] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112640 00:26:03.437 [2024-07-25 17:16:55.654962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085 00:26:03.437 [2024-07-25 17:16:55.654972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:03.437 [2024-07-25 17:16:55.655013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:03.437 [2024-07-25 17:16:55.655032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:03.437 [2024-07-25 17:16:55.655049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:03.437 [2024-07-25 17:16:55.655065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:03.437 [2024-07-25 17:16:55.655084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.437 [2024-07-25 17:16:55.655098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:03.437 [2024-07-25 17:16:55.655109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:26:03.438 [2024-07-25 17:16:55.655120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.670318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.438 [2024-07-25 17:16:55.670367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.438 [2024-07-25 17:16:55.670393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.160 ms 00:26:03.438 [2024-07-25 17:16:55.670407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.670860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.438 [2024-07-25 17:16:55.670892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.438 [2024-07-25 17:16:55.670906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:26:03.438 [2024-07-25 17:16:55.670921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.704487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.704564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.438 [2024-07-25 17:16:55.704578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.704596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.704653] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.704668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.438 [2024-07-25 17:16:55.704678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.704687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.704775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.704793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.438 [2024-07-25 17:16:55.704810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.704820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.704840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.704853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.438 [2024-07-25 17:16:55.704863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.704872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.795086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.795148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.438 [2024-07-25 17:16:55.795170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.795181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.865806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.865874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.438 [2024-07-25 17:16:55.865888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.865898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.865958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.865974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.438 [2024-07-25 17:16:55.865985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.866187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.438 [2024-07-25 17:16:55.866199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.866338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.438 [2024-07-25 17:16:55.866350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866360] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.866478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.438 [2024-07-25 17:16:55.866489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.866590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.438 [2024-07-25 17:16:55.866600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.438 [2024-07-25 17:16:55.866714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.438 [2024-07-25 17:16:55.866725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.438 [2024-07-25 17:16:55.866735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.438 [2024-07-25 17:16:55.866877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.876 ms, result 0 00:26:04.815 00:26:04.815 00:26:04.815 17:16:57 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:05.073 [2024-07-25 17:16:57.376942] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:05.073 [2024-07-25 17:16:57.377139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82133 ] 00:26:05.331 [2024-07-25 17:16:57.548639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.331 [2024-07-25 17:16:57.747537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.899 [2024-07-25 17:16:58.069719] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.899 [2024-07-25 17:16:58.069800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.899 [2024-07-25 17:16:58.229919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.229986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.899 [2024-07-25 17:16:58.230017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.899 [2024-07-25 17:16:58.230028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.230121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.230139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.899 [2024-07-25 17:16:58.230151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:05.899 [2024-07-25 17:16:58.230166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.230198] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.899 [2024-07-25 17:16:58.231093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.899 [2024-07-25 17:16:58.231125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.231137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.899 [2024-07-25 17:16:58.231149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:26:05.899 [2024-07-25 17:16:58.231159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.233297] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:05.899 [2024-07-25 17:16:58.248301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.248349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:05.899 [2024-07-25 17:16:58.248364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:26:05.899 [2024-07-25 17:16:58.248374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.248440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.248461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:05.899 [2024-07-25 17:16:58.248472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:05.899 [2024-07-25 17:16:58.248482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.257268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
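As the "Open base bdev" / "Open cache bdev" steps above show, ftl0 is assembled from two bdevs: a base (data) bdev and the nvc0n1p0 partition used as the write-buffer / NV cache. In this run the device is reconstructed from the ftl.json config passed to spdk_dd, but an equivalent FTL bdev is normally created over RPC along these lines (illustration only; the base-bdev name and the exact flag spelling are assumptions and can differ between SPDK versions):

    SPDK=/home/vagrant/spdk_repo/spdk
    # -b: name of the new FTL bdev, -d: base (data) bdev, -c: NV-cache / write-buffer bdev
    "$SPDK/scripts/rpc.py" bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0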
00:26:05.899 [2024-07-25 17:16:58.257316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.899 [2024-07-25 17:16:58.257331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.688 ms 00:26:05.899 [2024-07-25 17:16:58.257340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.257434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.257451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.899 [2024-07-25 17:16:58.257462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:05.899 [2024-07-25 17:16:58.257472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.257527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.257542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.899 [2024-07-25 17:16:58.257553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:05.899 [2024-07-25 17:16:58.257564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.257594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.899 [2024-07-25 17:16:58.262200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.262230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.899 [2024-07-25 17:16:58.262243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:26:05.899 [2024-07-25 17:16:58.262252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.262293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.899 [2024-07-25 17:16:58.262308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.899 [2024-07-25 17:16:58.262319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:05.899 [2024-07-25 17:16:58.262328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.899 [2024-07-25 17:16:58.262392] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:05.899 [2024-07-25 17:16:58.262423] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:05.900 [2024-07-25 17:16:58.262459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:05.900 [2024-07-25 17:16:58.262480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:05.900 [2024-07-25 17:16:58.262571] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.900 [2024-07-25 17:16:58.262585] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.900 [2024-07-25 17:16:58.262598] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:05.900 [2024-07-25 17:16:58.262611] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.900 [2024-07-25 17:16:58.262622] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.900 [2024-07-25 17:16:58.262633] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:05.900 [2024-07-25 17:16:58.262671] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.900 [2024-07-25 17:16:58.262692] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.900 [2024-07-25 17:16:58.262701] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.900 [2024-07-25 17:16:58.262712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.900 [2024-07-25 17:16:58.262728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.900 [2024-07-25 17:16:58.262739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:26:05.900 [2024-07-25 17:16:58.262749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.900 [2024-07-25 17:16:58.262831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.900 [2024-07-25 17:16:58.262845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.900 [2024-07-25 17:16:58.262855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:05.900 [2024-07-25 17:16:58.262865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.900 [2024-07-25 17:16:58.262957] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.900 [2024-07-25 17:16:58.262988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.900 [2024-07-25 17:16:58.263003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.900 [2024-07-25 17:16:58.263051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.900 [2024-07-25 17:16:58.263080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.900 [2024-07-25 17:16:58.263098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.900 [2024-07-25 17:16:58.263106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:05.900 [2024-07-25 17:16:58.263116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.900 [2024-07-25 17:16:58.263125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.900 [2024-07-25 17:16:58.263134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:05.900 [2024-07-25 17:16:58.263146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.900 [2024-07-25 17:16:58.263165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263175] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.900 [2024-07-25 17:16:58.263205] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263214] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.900 [2024-07-25 17:16:58.263233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.900 [2024-07-25 17:16:58.263260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.900 [2024-07-25 17:16:58.263288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.900 [2024-07-25 17:16:58.263315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.900 [2024-07-25 17:16:58.263333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.900 [2024-07-25 17:16:58.263342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:05.900 [2024-07-25 17:16:58.263351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.900 [2024-07-25 17:16:58.263361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.900 [2024-07-25 17:16:58.263370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:05.900 [2024-07-25 17:16:58.263378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.900 [2024-07-25 17:16:58.263396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:05.900 [2024-07-25 17:16:58.263406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263415] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.900 [2024-07-25 17:16:58.263425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.900 [2024-07-25 17:16:58.263434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.900 [2024-07-25 17:16:58.263455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.900 [2024-07-25 17:16:58.263464] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.900 [2024-07-25 17:16:58.263474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.900 
[2024-07-25 17:16:58.263483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.900 [2024-07-25 17:16:58.263492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.900 [2024-07-25 17:16:58.263502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.900 [2024-07-25 17:16:58.263512] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.900 [2024-07-25 17:16:58.263524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:05.900 [2024-07-25 17:16:58.263545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:05.900 [2024-07-25 17:16:58.263555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:05.900 [2024-07-25 17:16:58.263565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:05.900 [2024-07-25 17:16:58.263574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:05.900 [2024-07-25 17:16:58.263584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:05.900 [2024-07-25 17:16:58.263593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:05.900 [2024-07-25 17:16:58.263603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:05.900 [2024-07-25 17:16:58.263613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:05.900 [2024-07-25 17:16:58.263623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:05.900 [2024-07-25 17:16:58.263670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:05.900 [2024-07-25 17:16:58.263681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.900 [2024-07-25 17:16:58.263707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.900 [2024-07-25 17:16:58.263717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.900 [2024-07-25 17:16:58.263727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.900 [2024-07-25 17:16:58.263738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.263748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.901 [2024-07-25 17:16:58.263758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:26:05.901 [2024-07-25 17:16:58.263767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.313369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.313437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.901 [2024-07-25 17:16:58.313466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.544 ms 00:26:05.901 [2024-07-25 17:16:58.313477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.313590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.313606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.901 [2024-07-25 17:16:58.313618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:05.901 [2024-07-25 17:16:58.313628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.351637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.351694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.901 [2024-07-25 17:16:58.351710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.918 ms 00:26:05.901 [2024-07-25 17:16:58.351720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.351771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.351786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.901 [2024-07-25 17:16:58.351798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.901 [2024-07-25 17:16:58.351813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.352530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.352561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.901 [2024-07-25 17:16:58.352579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:26:05.901 [2024-07-25 17:16:58.352589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.901 [2024-07-25 17:16:58.352770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.901 [2024-07-25 17:16:58.352796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.901 [2024-07-25 17:16:58.352808] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:26:05.901 [2024-07-25 17:16:58.352818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.369710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.369759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.160 [2024-07-25 17:16:58.369774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.860 ms 00:26:06.160 [2024-07-25 17:16:58.369788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.384799] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:06.160 [2024-07-25 17:16:58.384852] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:06.160 [2024-07-25 17:16:58.384884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.384894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:06.160 [2024-07-25 17:16:58.384905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.970 ms 00:26:06.160 [2024-07-25 17:16:58.384915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.410574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.410630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:06.160 [2024-07-25 17:16:58.410667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.619 ms 00:26:06.160 [2024-07-25 17:16:58.410678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.424034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.424082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:06.160 [2024-07-25 17:16:58.424096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.328 ms 00:26:06.160 [2024-07-25 17:16:58.424105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.436816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.436866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:06.160 [2024-07-25 17:16:58.436884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.673 ms 00:26:06.160 [2024-07-25 17:16:58.436909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.437691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.437726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:06.160 [2024-07-25 17:16:58.437740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:26:06.160 [2024-07-25 17:16:58.437750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.506700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.506785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:06.160 [2024-07-25 17:16:58.506804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.920 ms 00:26:06.160 [2024-07-25 17:16:58.506822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.160 [2024-07-25 17:16:58.517562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:06.160 [2024-07-25 17:16:58.520589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.160 [2024-07-25 17:16:58.520635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:06.160 [2024-07-25 17:16:58.520650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.702 ms 00:26:06.161 [2024-07-25 17:16:58.520660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.520762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.520780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:06.161 [2024-07-25 17:16:58.520793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:06.161 [2024-07-25 17:16:58.520802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.522736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.522783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:06.161 [2024-07-25 17:16:58.522797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:26:06.161 [2024-07-25 17:16:58.522808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.522845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.522861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:06.161 [2024-07-25 17:16:58.522873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:06.161 [2024-07-25 17:16:58.522884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.522923] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:06.161 [2024-07-25 17:16:58.522939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.522954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:06.161 [2024-07-25 17:16:58.522966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:06.161 [2024-07-25 17:16:58.523008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.550118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.550170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:06.161 [2024-07-25 17:16:58.550184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.059 ms 00:26:06.161 [2024-07-25 17:16:58.550202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.161 [2024-07-25 17:16:58.550281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.161 [2024-07-25 17:16:58.550297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:06.161 [2024-07-25 17:16:58.550308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:06.161 [2024-07-25 17:16:58.550318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
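The layout dump above is internally consistent and easy to cross-check: assuming the 4 KiB FTL block size, the superblock region sizes line up with the MiB figures printed for each region.

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P: 20971520 entries x 4 B address size -> 80 MiB ("Region l2p ... 80.00 MiB")
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # SB region type:0x2, blk_sz 0x5000        -> 80 MiB (same offset/size as the L2P region)
    echo $(( 0x800 * 4096 / 1024 / 1024 ))    # each p2l region, blk_sz 0x800             -> 8 MiB ("blocks: 8.00 MiB")
    echo $(( 2048 * 4096 / 1024 / 1024 ))     # 2048 P2L checkpoint pages x 4 KiB         -> 8 MiB per checkpoint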
00:26:06.161 [2024-07-25 17:16:58.557984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.876 ms, result 0 00:26:50.058  Copying: 20/1024 [MB] (20 MBps) Copying: 44/1024 [MB] (23 MBps) Copying: 67/1024 [MB] (23 MBps) Copying: 91/1024 [MB] (23 MBps) Copying: 115/1024 [MB] (23 MBps) Copying: 138/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (23 MBps) Copying: 186/1024 [MB] (23 MBps) Copying: 210/1024 [MB] (23 MBps) Copying: 234/1024 [MB] (23 MBps) Copying: 257/1024 [MB] (23 MBps) Copying: 281/1024 [MB] (23 MBps) Copying: 304/1024 [MB] (23 MBps) Copying: 328/1024 [MB] (23 MBps) Copying: 352/1024 [MB] (23 MBps) Copying: 375/1024 [MB] (23 MBps) Copying: 399/1024 [MB] (23 MBps) Copying: 423/1024 [MB] (23 MBps) Copying: 446/1024 [MB] (23 MBps) Copying: 470/1024 [MB] (23 MBps) Copying: 493/1024 [MB] (23 MBps) Copying: 517/1024 [MB] (23 MBps) Copying: 540/1024 [MB] (23 MBps) Copying: 564/1024 [MB] (23 MBps) Copying: 587/1024 [MB] (23 MBps) Copying: 610/1024 [MB] (23 MBps) Copying: 634/1024 [MB] (23 MBps) Copying: 657/1024 [MB] (22 MBps) Copying: 680/1024 [MB] (23 MBps) Copying: 703/1024 [MB] (23 MBps) Copying: 727/1024 [MB] (23 MBps) Copying: 751/1024 [MB] (23 MBps) Copying: 774/1024 [MB] (23 MBps) Copying: 798/1024 [MB] (24 MBps) Copying: 822/1024 [MB] (23 MBps) Copying: 846/1024 [MB] (23 MBps) Copying: 869/1024 [MB] (23 MBps) Copying: 893/1024 [MB] (23 MBps) Copying: 917/1024 [MB] (23 MBps) Copying: 940/1024 [MB] (23 MBps) Copying: 964/1024 [MB] (23 MBps) Copying: 987/1024 [MB] (23 MBps) Copying: 1011/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-25 17:17:42.411859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.412283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:50.058 [2024-07-25 17:17:42.412421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:50.058 [2024-07-25 17:17:42.412444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.412492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:50.058 [2024-07-25 17:17:42.416400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.416588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:50.058 [2024-07-25 17:17:42.416715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.884 ms 00:26:50.058 [2024-07-25 17:17:42.416780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.417181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.417353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:50.058 [2024-07-25 17:17:42.417391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:50.058 [2024-07-25 17:17:42.417402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.422576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.422630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.058 [2024-07-25 17:17:42.422709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:26:50.058 [2024-07-25 17:17:42.422721] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.428078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.428114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.058 [2024-07-25 17:17:42.428142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.319 ms 00:26:50.058 [2024-07-25 17:17:42.428152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.453839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.453883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.058 [2024-07-25 17:17:42.453913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.639 ms 00:26:50.058 [2024-07-25 17:17:42.453922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.058 [2024-07-25 17:17:42.468884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.058 [2024-07-25 17:17:42.468926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.058 [2024-07-25 17:17:42.468962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.924 ms 00:26:50.058 [2024-07-25 17:17:42.468973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.597103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.318 [2024-07-25 17:17:42.597180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.318 [2024-07-25 17:17:42.597227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.090 ms 00:26:50.318 [2024-07-25 17:17:42.597237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.621776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.318 [2024-07-25 17:17:42.621814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:50.318 [2024-07-25 17:17:42.621844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.519 ms 00:26:50.318 [2024-07-25 17:17:42.621853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.645819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.318 [2024-07-25 17:17:42.645856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:50.318 [2024-07-25 17:17:42.645885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.913 ms 00:26:50.318 [2024-07-25 17:17:42.645894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.669509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.318 [2024-07-25 17:17:42.669553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.318 [2024-07-25 17:17:42.669582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.578 ms 00:26:50.318 [2024-07-25 17:17:42.669604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.693325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.318 [2024-07-25 17:17:42.693364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.318 [2024-07-25 17:17:42.693392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.662 ms 
00:26:50.318 [2024-07-25 17:17:42.693401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.318 [2024-07-25 17:17:42.693438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.318 [2024-07-25 17:17:42.693457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:26:50.318 [2024-07-25 17:17:42.693469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693749] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:50.318 [2024-07-25 17:17:42.693954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.693973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.693990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 
[2024-07-25 17:17:42.694100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:26:50.319 [2024-07-25 17:17:42.694398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:50.319 [2024-07-25 17:17:42.694822] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.319 [2024-07-25 17:17:42.694836] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94fc35b7-b5e7-46b8-bc04-5da701b70015 00:26:50.319 [2024-07-25 17:17:42.694847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:26:50.319 [2024-07-25 17:17:42.694860] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21952 00:26:50.319 [2024-07-25 17:17:42.694888] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20992 00:26:50.319 [2024-07-25 17:17:42.694908] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0457 00:26:50.319 [2024-07-25 17:17:42.694919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.319 [2024-07-25 17:17:42.694931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.319 [2024-07-25 17:17:42.694944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.319 [2024-07-25 17:17:42.694968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.319 [2024-07-25 17:17:42.694977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.319 [2024-07-25 17:17:42.694991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.319 [2024-07-25 17:17:42.695027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.319 [2024-07-25 17:17:42.695043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:26:50.319 [2024-07-25 17:17:42.695053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.319 [2024-07-25 17:17:42.709424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.319 [2024-07-25 17:17:42.709477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.319 [2024-07-25 17:17:42.709515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.349 ms 00:26:50.319 [2024-07-25 17:17:42.709537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.319 [2024-07-25 17:17:42.710002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.319 [2024-07-25 17:17:42.710086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.319 [2024-07-25 17:17:42.710116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:26:50.319 [2024-07-25 17:17:42.710126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.319 [2024-07-25 17:17:42.741557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.319 [2024-07-25 17:17:42.741607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.319 [2024-07-25 17:17:42.741640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.319 [2024-07-25 17:17:42.741650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.319 [2024-07-25 17:17:42.741701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.319 [2024-07-25 17:17:42.741714] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.319 [2024-07-25 17:17:42.741724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.319 [2024-07-25 17:17:42.741733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.319 [2024-07-25 17:17:42.741813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.320 [2024-07-25 17:17:42.741829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.320 [2024-07-25 17:17:42.741840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.320 [2024-07-25 17:17:42.741875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.320 [2024-07-25 17:17:42.741927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.320 [2024-07-25 17:17:42.741942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.320 [2024-07-25 17:17:42.741953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.320 [2024-07-25 17:17:42.741963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.577 [2024-07-25 17:17:42.828153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.577 [2024-07-25 17:17:42.828220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.577 [2024-07-25 17:17:42.828251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.577 [2024-07-25 17:17:42.828268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.577 [2024-07-25 17:17:42.897645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.897693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.578 [2024-07-25 17:17:42.897723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.897734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.897809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.897825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.578 [2024-07-25 17:17:42.897846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.897855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.897954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.897972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:50.578 [2024-07-25 17:17:42.898003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.898097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.898243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.898280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:50.578 [2024-07-25 17:17:42.898301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.898312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.898377] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.898409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:50.578 [2024-07-25 17:17:42.898423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.898453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.898538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.898556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:50.578 [2024-07-25 17:17:42.898574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.898592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.898693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.578 [2024-07-25 17:17:42.898710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:50.578 [2024-07-25 17:17:42.898722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.578 [2024-07-25 17:17:42.898732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.578 [2024-07-25 17:17:42.898887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.996 ms, result 0 00:26:51.512 00:26:51.512 00:26:51.512 17:17:43 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:53.414 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:53.414 Process with pid 80420 is not found 00:26:53.414 Remove shared memory files 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80420 00:26:53.414 17:17:45 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80420 ']' 00:26:53.414 17:17:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80420 00:26:53.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80420) - No such process 00:26:53.414 17:17:45 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80420 is not found' 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:53.414 17:17:45 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:53.414 00:26:53.414 real 3m35.635s 00:26:53.414 user 3m20.528s 00:26:53.414 sys 0m15.680s 00:26:53.414 17:17:45 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:26:53.414 17:17:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:53.414 ************************************ 00:26:53.414 END TEST ftl_restore 00:26:53.414 ************************************ 00:26:53.672 17:17:45 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:53.672 17:17:45 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:53.672 17:17:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:53.672 17:17:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:53.672 ************************************ 00:26:53.672 START TEST ftl_dirty_shutdown 00:26:53.672 ************************************ 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:53.672 * Looking for test storage... 00:26:53.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:53.672 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.673 17:17:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82671 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82671 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82671 ']' 00:26:53.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.673 17:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.673 [2024-07-25 17:17:46.105119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:53.673 [2024-07-25 17:17:46.105280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82671 ] 00:26:53.931 [2024-07-25 17:17:46.266814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.189 [2024-07-25 17:17:46.481839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:54.756 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:55.322 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:55.581 { 00:26:55.581 "name": "nvme0n1", 00:26:55.581 "aliases": [ 00:26:55.581 "f8dc6754-5b4d-4ac9-8479-8202bceb75f7" 00:26:55.581 ], 00:26:55.581 "product_name": "NVMe disk", 00:26:55.581 "block_size": 4096, 00:26:55.581 "num_blocks": 1310720, 00:26:55.581 "uuid": "f8dc6754-5b4d-4ac9-8479-8202bceb75f7", 00:26:55.581 "assigned_rate_limits": { 00:26:55.581 "rw_ios_per_sec": 0, 00:26:55.581 "rw_mbytes_per_sec": 0, 00:26:55.581 "r_mbytes_per_sec": 0, 00:26:55.581 "w_mbytes_per_sec": 0 00:26:55.581 }, 00:26:55.581 "claimed": true, 00:26:55.581 "claim_type": "read_many_write_one", 00:26:55.581 "zoned": false, 00:26:55.581 "supported_io_types": { 00:26:55.581 "read": true, 00:26:55.581 "write": true, 00:26:55.581 "unmap": true, 00:26:55.581 "flush": true, 00:26:55.581 "reset": true, 00:26:55.581 "nvme_admin": true, 00:26:55.581 "nvme_io": true, 00:26:55.581 "nvme_io_md": false, 00:26:55.581 "write_zeroes": true, 00:26:55.581 "zcopy": false, 00:26:55.581 "get_zone_info": false, 00:26:55.581 "zone_management": false, 00:26:55.581 "zone_append": false, 00:26:55.581 "compare": true, 00:26:55.581 "compare_and_write": false, 00:26:55.581 "abort": true, 00:26:55.581 "seek_hole": false, 00:26:55.581 "seek_data": false, 00:26:55.581 "copy": true, 00:26:55.581 
"nvme_iov_md": false 00:26:55.581 }, 00:26:55.581 "driver_specific": { 00:26:55.581 "nvme": [ 00:26:55.581 { 00:26:55.581 "pci_address": "0000:00:11.0", 00:26:55.581 "trid": { 00:26:55.581 "trtype": "PCIe", 00:26:55.581 "traddr": "0000:00:11.0" 00:26:55.581 }, 00:26:55.581 "ctrlr_data": { 00:26:55.581 "cntlid": 0, 00:26:55.581 "vendor_id": "0x1b36", 00:26:55.581 "model_number": "QEMU NVMe Ctrl", 00:26:55.581 "serial_number": "12341", 00:26:55.581 "firmware_revision": "8.0.0", 00:26:55.581 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:55.581 "oacs": { 00:26:55.581 "security": 0, 00:26:55.581 "format": 1, 00:26:55.581 "firmware": 0, 00:26:55.581 "ns_manage": 1 00:26:55.581 }, 00:26:55.581 "multi_ctrlr": false, 00:26:55.581 "ana_reporting": false 00:26:55.581 }, 00:26:55.581 "vs": { 00:26:55.581 "nvme_version": "1.4" 00:26:55.581 }, 00:26:55.581 "ns_data": { 00:26:55.581 "id": 1, 00:26:55.581 "can_share": false 00:26:55.581 } 00:26:55.581 } 00:26:55.581 ], 00:26:55.581 "mp_policy": "active_passive" 00:26:55.581 } 00:26:55.581 } 00:26:55.581 ]' 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:55.581 17:17:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:55.839 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=82dfaf6f-9174-4fd9-a39d-971ebfe53599 00:26:55.839 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:55.839 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82dfaf6f-9174-4fd9-a39d-971ebfe53599 00:26:56.098 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=bcf61d49-3d73-4b53-8fe5-639a130185c5 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bcf61d49-3d73-4b53-8fe5-639a130185c5 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:56.356 
17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:56.356 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:56.615 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:56.615 { 00:26:56.615 "name": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:56.615 "aliases": [ 00:26:56.615 "lvs/nvme0n1p0" 00:26:56.615 ], 00:26:56.615 "product_name": "Logical Volume", 00:26:56.615 "block_size": 4096, 00:26:56.615 "num_blocks": 26476544, 00:26:56.615 "uuid": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:56.615 "assigned_rate_limits": { 00:26:56.615 "rw_ios_per_sec": 0, 00:26:56.615 "rw_mbytes_per_sec": 0, 00:26:56.615 "r_mbytes_per_sec": 0, 00:26:56.615 "w_mbytes_per_sec": 0 00:26:56.616 }, 00:26:56.616 "claimed": false, 00:26:56.616 "zoned": false, 00:26:56.616 "supported_io_types": { 00:26:56.616 "read": true, 00:26:56.616 "write": true, 00:26:56.616 "unmap": true, 00:26:56.616 "flush": false, 00:26:56.616 "reset": true, 00:26:56.616 "nvme_admin": false, 00:26:56.616 "nvme_io": false, 00:26:56.616 "nvme_io_md": false, 00:26:56.616 "write_zeroes": true, 00:26:56.616 "zcopy": false, 00:26:56.616 "get_zone_info": false, 00:26:56.616 "zone_management": false, 00:26:56.616 "zone_append": false, 00:26:56.616 "compare": false, 00:26:56.616 "compare_and_write": false, 00:26:56.616 "abort": false, 00:26:56.616 "seek_hole": true, 00:26:56.616 "seek_data": true, 00:26:56.616 "copy": false, 00:26:56.616 "nvme_iov_md": false 00:26:56.616 }, 00:26:56.616 "driver_specific": { 00:26:56.616 "lvol": { 00:26:56.616 "lvol_store_uuid": "bcf61d49-3d73-4b53-8fe5-639a130185c5", 00:26:56.616 "base_bdev": "nvme0n1", 00:26:56.616 "thin_provision": true, 00:26:56.616 "num_allocated_clusters": 0, 00:26:56.616 "snapshot": false, 00:26:56.616 "clone": false, 00:26:56.616 "esnap_clone": false 00:26:56.616 } 00:26:56.616 } 00:26:56.616 } 00:26:56.616 ]' 00:26:56.616 17:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:56.616 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:56.874 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:57.132 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:57.132 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:57.132 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.132 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.133 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:57.133 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:57.133 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:57.133 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:57.392 { 00:26:57.392 "name": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:57.392 "aliases": [ 00:26:57.392 "lvs/nvme0n1p0" 00:26:57.392 ], 00:26:57.392 "product_name": "Logical Volume", 00:26:57.392 "block_size": 4096, 00:26:57.392 "num_blocks": 26476544, 00:26:57.392 "uuid": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:57.392 "assigned_rate_limits": { 00:26:57.392 "rw_ios_per_sec": 0, 00:26:57.392 "rw_mbytes_per_sec": 0, 00:26:57.392 "r_mbytes_per_sec": 0, 00:26:57.392 "w_mbytes_per_sec": 0 00:26:57.392 }, 00:26:57.392 "claimed": false, 00:26:57.392 "zoned": false, 00:26:57.392 "supported_io_types": { 00:26:57.392 "read": true, 00:26:57.392 "write": true, 00:26:57.392 "unmap": true, 00:26:57.392 "flush": false, 00:26:57.392 "reset": true, 00:26:57.392 "nvme_admin": false, 00:26:57.392 "nvme_io": false, 00:26:57.392 "nvme_io_md": false, 00:26:57.392 "write_zeroes": true, 00:26:57.392 "zcopy": false, 00:26:57.392 "get_zone_info": false, 00:26:57.392 "zone_management": false, 00:26:57.392 "zone_append": false, 00:26:57.392 "compare": false, 00:26:57.392 "compare_and_write": false, 00:26:57.392 "abort": false, 00:26:57.392 "seek_hole": true, 00:26:57.392 "seek_data": true, 00:26:57.392 "copy": false, 00:26:57.392 "nvme_iov_md": false 00:26:57.392 }, 00:26:57.392 "driver_specific": { 00:26:57.392 "lvol": { 00:26:57.392 "lvol_store_uuid": "bcf61d49-3d73-4b53-8fe5-639a130185c5", 00:26:57.392 "base_bdev": "nvme0n1", 00:26:57.392 "thin_provision": true, 00:26:57.392 "num_allocated_clusters": 0, 00:26:57.392 "snapshot": false, 00:26:57.392 "clone": false, 00:26:57.392 "esnap_clone": false 00:26:57.392 } 00:26:57.392 } 00:26:57.392 } 00:26:57.392 ]' 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:57.392 17:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:57.651 17:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40efaee8-7f55-4452-8f8b-f93da8ff8d51 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:57.909 { 00:26:57.909 "name": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:57.909 "aliases": [ 00:26:57.909 "lvs/nvme0n1p0" 00:26:57.909 ], 00:26:57.909 "product_name": "Logical Volume", 00:26:57.909 "block_size": 4096, 00:26:57.909 "num_blocks": 26476544, 00:26:57.909 "uuid": "40efaee8-7f55-4452-8f8b-f93da8ff8d51", 00:26:57.909 "assigned_rate_limits": { 00:26:57.909 "rw_ios_per_sec": 0, 00:26:57.909 "rw_mbytes_per_sec": 0, 00:26:57.909 "r_mbytes_per_sec": 0, 00:26:57.909 "w_mbytes_per_sec": 0 00:26:57.909 }, 00:26:57.909 "claimed": false, 00:26:57.909 "zoned": false, 00:26:57.909 "supported_io_types": { 00:26:57.909 "read": true, 00:26:57.909 "write": true, 00:26:57.909 "unmap": true, 00:26:57.909 "flush": false, 00:26:57.909 "reset": true, 00:26:57.909 "nvme_admin": false, 00:26:57.909 "nvme_io": false, 00:26:57.909 "nvme_io_md": false, 00:26:57.909 "write_zeroes": true, 00:26:57.909 "zcopy": false, 00:26:57.909 "get_zone_info": false, 00:26:57.909 "zone_management": false, 00:26:57.909 "zone_append": false, 00:26:57.909 "compare": false, 00:26:57.909 "compare_and_write": false, 00:26:57.909 "abort": false, 00:26:57.909 "seek_hole": true, 00:26:57.909 "seek_data": true, 00:26:57.909 "copy": false, 00:26:57.909 "nvme_iov_md": false 00:26:57.909 }, 00:26:57.909 "driver_specific": { 00:26:57.909 "lvol": { 00:26:57.909 "lvol_store_uuid": "bcf61d49-3d73-4b53-8fe5-639a130185c5", 00:26:57.909 "base_bdev": "nvme0n1", 00:26:57.909 "thin_provision": true, 00:26:57.909 "num_allocated_clusters": 0, 00:26:57.909 "snapshot": false, 00:26:57.909 "clone": false, 00:26:57.909 "esnap_clone": false 00:26:57.909 } 00:26:57.909 } 00:26:57.909 } 00:26:57.909 ]' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 40efaee8-7f55-4452-8f8b-f93da8ff8d51 
--l2p_dram_limit 10' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:57.909 17:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 40efaee8-7f55-4452-8f8b-f93da8ff8d51 --l2p_dram_limit 10 -c nvc0n1p0 00:26:58.168 [2024-07-25 17:17:50.436861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.436917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:58.168 [2024-07-25 17:17:50.436937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:58.168 [2024-07-25 17:17:50.436950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.437062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.437085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:58.168 [2024-07-25 17:17:50.437098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:58.168 [2024-07-25 17:17:50.437126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.437154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:58.168 [2024-07-25 17:17:50.438125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:58.168 [2024-07-25 17:17:50.438153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.438171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:58.168 [2024-07-25 17:17:50.438183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:26:58.168 [2024-07-25 17:17:50.438196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.438329] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f9fc5255-4196-4a04-b04b-38d96588e30f 00:26:58.168 [2024-07-25 17:17:50.440728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.440763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:58.168 [2024-07-25 17:17:50.440782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:58.168 [2024-07-25 17:17:50.440792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.453634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.453676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:58.168 [2024-07-25 17:17:50.453694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms 00:26:58.168 [2024-07-25 17:17:50.453705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.453814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.453832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:58.168 [2024-07-25 17:17:50.453846] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:58.168 [2024-07-25 17:17:50.453856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.453933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.453949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:58.168 [2024-07-25 17:17:50.453967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:58.168 [2024-07-25 17:17:50.454016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.168 [2024-07-25 17:17:50.454054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:58.168 [2024-07-25 17:17:50.458874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.168 [2024-07-25 17:17:50.458917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:58.168 [2024-07-25 17:17:50.458947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:26:58.169 [2024-07-25 17:17:50.458959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.169 [2024-07-25 17:17:50.459032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.169 [2024-07-25 17:17:50.459053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:58.169 [2024-07-25 17:17:50.459065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:58.169 [2024-07-25 17:17:50.459082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.169 [2024-07-25 17:17:50.459138] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:58.169 [2024-07-25 17:17:50.459297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:58.169 [2024-07-25 17:17:50.459316] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:58.169 [2024-07-25 17:17:50.459336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:58.169 [2024-07-25 17:17:50.459350] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:58.169 [2024-07-25 17:17:50.459364] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:58.169 [2024-07-25 17:17:50.459391] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:58.169 [2024-07-25 17:17:50.459439] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:58.169 [2024-07-25 17:17:50.459465] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:58.169 [2024-07-25 17:17:50.459478] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:58.169 [2024-07-25 17:17:50.459490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.169 [2024-07-25 17:17:50.459503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:58.169 [2024-07-25 17:17:50.459515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:26:58.169 [2024-07-25 17:17:50.459528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.169 [2024-07-25 17:17:50.459619] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.169 [2024-07-25 17:17:50.459639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:58.169 [2024-07-25 17:17:50.459651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:58.169 [2024-07-25 17:17:50.459668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.169 [2024-07-25 17:17:50.459770] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:58.169 [2024-07-25 17:17:50.459798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:58.169 [2024-07-25 17:17:50.459821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:58.169 [2024-07-25 17:17:50.459836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.459847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:58.169 [2024-07-25 17:17:50.459860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.459871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:58.169 [2024-07-25 17:17:50.459884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:58.169 [2024-07-25 17:17:50.459894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:58.169 [2024-07-25 17:17:50.459906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:58.169 [2024-07-25 17:17:50.459916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:58.169 [2024-07-25 17:17:50.459930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:58.169 [2024-07-25 17:17:50.459941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:58.169 [2024-07-25 17:17:50.459953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:58.169 [2024-07-25 17:17:50.459963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:58.169 [2024-07-25 17:17:50.459989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:58.169 [2024-07-25 17:17:50.460019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:58.169 [2024-07-25 17:17:50.460068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:58.169 [2024-07-25 17:17:50.460103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:58.169 [2024-07-25 17:17:50.460136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460159] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:58.169 [2024-07-25 17:17:50.460187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:58.169 [2024-07-25 17:17:50.460219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:58.169 [2024-07-25 17:17:50.460244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:58.169 [2024-07-25 17:17:50.460256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:58.169 [2024-07-25 17:17:50.460266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:58.169 [2024-07-25 17:17:50.460280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:58.169 [2024-07-25 17:17:50.460290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:58.169 [2024-07-25 17:17:50.460302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:58.169 [2024-07-25 17:17:50.460324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:58.169 [2024-07-25 17:17:50.460334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460345] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:58.169 [2024-07-25 17:17:50.460357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:58.169 [2024-07-25 17:17:50.460369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:58.169 [2024-07-25 17:17:50.460406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:58.169 [2024-07-25 17:17:50.460416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:58.169 [2024-07-25 17:17:50.460430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:58.169 [2024-07-25 17:17:50.460440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:58.169 [2024-07-25 17:17:50.460451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:58.169 [2024-07-25 17:17:50.460462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:58.169 [2024-07-25 17:17:50.460479] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:58.169 [2024-07-25 17:17:50.460494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:58.169 [2024-07-25 17:17:50.460520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:58.169 [2024-07-25 17:17:50.460532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:58.169 [2024-07-25 17:17:50.460543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:58.169 [2024-07-25 17:17:50.460556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:58.169 [2024-07-25 17:17:50.460566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:58.169 [2024-07-25 17:17:50.460580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:58.169 [2024-07-25 17:17:50.460591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:58.169 [2024-07-25 17:17:50.460603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:58.169 [2024-07-25 17:17:50.460614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:58.169 [2024-07-25 17:17:50.460677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:58.169 [2024-07-25 17:17:50.460689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:58.169 [2024-07-25 17:17:50.460713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:58.169 [2024-07-25 17:17:50.460725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:58.169 [2024-07-25 17:17:50.460737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:58.170 [2024-07-25 17:17:50.460750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.170 [2024-07-25 17:17:50.460761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:58.170 [2024-07-25 17:17:50.460774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:26:58.170 [2024-07-25 17:17:50.460785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.170 [2024-07-25 17:17:50.460839] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:58.170 [2024-07-25 17:17:50.460854] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:02.361 [2024-07-25 17:17:54.553660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.554009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:02.361 [2024-07-25 17:17:54.554152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4092.837 ms 00:27:02.361 [2024-07-25 17:17:54.554326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.593858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.594242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:02.361 [2024-07-25 17:17:54.594380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.092 ms 00:27:02.361 [2024-07-25 17:17:54.594512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.594806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.594876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:02.361 [2024-07-25 17:17:54.595040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:02.361 [2024-07-25 17:17:54.595066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.637866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.637926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:02.361 [2024-07-25 17:17:54.637951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.693 ms 00:27:02.361 [2024-07-25 17:17:54.637991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.638076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.638103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:02.361 [2024-07-25 17:17:54.638127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:02.361 [2024-07-25 17:17:54.638140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.639004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.639054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:02.361 [2024-07-25 17:17:54.639075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:27:02.361 [2024-07-25 17:17:54.639088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.639280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.639302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:02.361 [2024-07-25 17:17:54.639318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:27:02.361 [2024-07-25 17:17:54.639331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.661861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.661907] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:02.361 [2024-07-25 17:17:54.661931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:27:02.361 [2024-07-25 17:17:54.661944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.361 [2024-07-25 17:17:54.677689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:02.361 [2024-07-25 17:17:54.682986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.361 [2024-07-25 17:17:54.683057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:02.361 [2024-07-25 17:17:54.683077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.911 ms 00:27:02.361 [2024-07-25 17:17:54.683093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.619 [2024-07-25 17:17:54.829031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.619 [2024-07-25 17:17:54.829155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:02.619 [2024-07-25 17:17:54.829182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 145.865 ms 00:27:02.619 [2024-07-25 17:17:54.829200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.619 [2024-07-25 17:17:54.829480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:54.829505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:02.620 [2024-07-25 17:17:54.829520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:27:02.620 [2024-07-25 17:17:54.829538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:54.859811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:54.859863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:02.620 [2024-07-25 17:17:54.859884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.184 ms 00:27:02.620 [2024-07-25 17:17:54.859948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:54.888369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:54.888417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:02.620 [2024-07-25 17:17:54.888437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.384 ms 00:27:02.620 [2024-07-25 17:17:54.888451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:54.889361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:54.889400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:02.620 [2024-07-25 17:17:54.889419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:27:02.620 [2024-07-25 17:17:54.889434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:54.989542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:54.989632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:02.620 [2024-07-25 17:17:54.989657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.042 ms 00:27:02.620 [2024-07-25 17:17:54.989678] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.022604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:55.022715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:02.620 [2024-07-25 17:17:55.022739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.869 ms 00:27:02.620 [2024-07-25 17:17:55.022756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.052569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:55.052646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:02.620 [2024-07-25 17:17:55.052666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.756 ms 00:27:02.620 [2024-07-25 17:17:55.052680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.083066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:55.083121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:02.620 [2024-07-25 17:17:55.083142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.333 ms 00:27:02.620 [2024-07-25 17:17:55.083173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.083250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:55.083275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:02.620 [2024-07-25 17:17:55.083292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:02.620 [2024-07-25 17:17:55.083312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.083446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.620 [2024-07-25 17:17:55.083476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:02.620 [2024-07-25 17:17:55.083490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:02.620 [2024-07-25 17:17:55.083506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.620 [2024-07-25 17:17:55.085188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4647.605 ms, result 0 00:27:02.878 { 00:27:02.878 "name": "ftl0", 00:27:02.878 "uuid": "f9fc5255-4196-4a04-b04b-38d96588e30f" 00:27:02.878 } 00:27:02.878 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:02.878 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:02.878 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:02.878 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:02.878 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:03.137 /dev/nbd0 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:03.137 1+0 records in 00:27:03.137 1+0 records out 00:27:03.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256185 s, 16.0 MB/s 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:27:03.137 17:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:03.396 [2024-07-25 17:17:55.684027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:03.396 [2024-07-25 17:17:55.684186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82819 ] 00:27:03.396 [2024-07-25 17:17:55.858313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.654 [2024-07-25 17:17:56.114382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.187  Copying: 211/1024 [MB] (211 MBps) Copying: 416/1024 [MB] (205 MBps) Copying: 627/1024 [MB] (211 MBps) Copying: 817/1024 [MB] (190 MBps) Copying: 1008/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 201 MBps) 00:27:10.187 00:27:10.187 17:18:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:12.112 17:18:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:12.112 [2024-07-25 17:18:04.560901] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:12.112 [2024-07-25 17:18:04.561638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82916 ] 00:27:12.370 [2024-07-25 17:18:04.722538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.629 [2024-07-25 17:18:04.965064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.700  Copying: 11/1024 [MB] (11 MBps) Copying: 23/1024 [MB] (11 MBps) Copying: 35/1024 [MB] (12 MBps) Copying: 47/1024 [MB] (11 MBps) Copying: 60/1024 [MB] (13 MBps) Copying: 74/1024 [MB] (14 MBps) Copying: 88/1024 [MB] (13 MBps) Copying: 101/1024 [MB] (13 MBps) Copying: 115/1024 [MB] (13 MBps) Copying: 129/1024 [MB] (13 MBps) Copying: 142/1024 [MB] (13 MBps) Copying: 156/1024 [MB] (13 MBps) Copying: 170/1024 [MB] (13 MBps) Copying: 184/1024 [MB] (14 MBps) Copying: 196/1024 [MB] (12 MBps) Copying: 208/1024 [MB] (11 MBps) Copying: 219/1024 [MB] (11 MBps) Copying: 231/1024 [MB] (11 MBps) Copying: 243/1024 [MB] (11 MBps) Copying: 255/1024 [MB] (11 MBps) Copying: 267/1024 [MB] (11 MBps) Copying: 279/1024 [MB] (11 MBps) Copying: 291/1024 [MB] (11 MBps) Copying: 303/1024 [MB] (11 MBps) Copying: 314/1024 [MB] (11 MBps) Copying: 327/1024 [MB] (12 MBps) Copying: 338/1024 [MB] (11 MBps) Copying: 350/1024 [MB] (11 MBps) Copying: 362/1024 [MB] (11 MBps) Copying: 374/1024 [MB] (11 MBps) Copying: 386/1024 [MB] (11 MBps) Copying: 398/1024 [MB] (11 MBps) Copying: 409/1024 [MB] (11 MBps) Copying: 421/1024 [MB] (11 MBps) Copying: 433/1024 [MB] (11 MBps) Copying: 445/1024 [MB] (12 MBps) Copying: 457/1024 [MB] (11 MBps) Copying: 470/1024 [MB] (13 MBps) Copying: 482/1024 [MB] (12 MBps) Copying: 494/1024 [MB] (11 MBps) Copying: 506/1024 [MB] (11 MBps) Copying: 518/1024 [MB] (11 MBps) Copying: 530/1024 [MB] (12 MBps) Copying: 542/1024 [MB] (12 MBps) Copying: 554/1024 [MB] (12 MBps) Copying: 566/1024 [MB] (11 MBps) Copying: 578/1024 [MB] (11 MBps) Copying: 590/1024 [MB] (11 MBps) Copying: 602/1024 [MB] (11 MBps) Copying: 614/1024 [MB] (11 MBps) Copying: 626/1024 [MB] (12 MBps) Copying: 638/1024 [MB] (11 MBps) Copying: 650/1024 [MB] (11 MBps) Copying: 662/1024 [MB] (11 MBps) Copying: 674/1024 [MB] (12 MBps) Copying: 686/1024 [MB] (12 MBps) Copying: 698/1024 [MB] (12 MBps) Copying: 710/1024 [MB] (11 MBps) Copying: 722/1024 [MB] (11 MBps) Copying: 734/1024 [MB] (12 MBps) Copying: 746/1024 [MB] (11 MBps) Copying: 757/1024 [MB] (11 MBps) Copying: 769/1024 [MB] (11 MBps) Copying: 781/1024 [MB] (11 MBps) Copying: 793/1024 [MB] (11 MBps) Copying: 805/1024 [MB] (12 MBps) Copying: 817/1024 [MB] (12 MBps) Copying: 829/1024 [MB] (11 MBps) Copying: 841/1024 [MB] (11 MBps) Copying: 852/1024 [MB] (11 MBps) Copying: 864/1024 [MB] (11 MBps) Copying: 876/1024 [MB] (11 MBps) Copying: 888/1024 [MB] (11 MBps) Copying: 900/1024 [MB] (12 MBps) Copying: 912/1024 [MB] (11 MBps) Copying: 924/1024 [MB] (12 MBps) Copying: 936/1024 [MB] (11 MBps) Copying: 948/1024 [MB] (11 MBps) Copying: 960/1024 [MB] (11 MBps) Copying: 972/1024 [MB] (11 MBps) Copying: 983/1024 [MB] (11 MBps) Copying: 995/1024 [MB] (11 MBps) Copying: 1007/1024 [MB] (12 MBps) Copying: 1019/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 12 MBps) 00:28:38.700 00:28:38.700 17:19:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:38.700 17:19:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:38.700 17:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:38.958 [2024-07-25 17:19:31.312674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.312746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:38.958 [2024-07-25 17:19:31.312792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.958 [2024-07-25 17:19:31.312806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.312848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:38.958 [2024-07-25 17:19:31.316230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.316273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:38.958 [2024-07-25 17:19:31.316292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:28:38.958 [2024-07-25 17:19:31.316308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.318390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.318445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:38.958 [2024-07-25 17:19:31.318467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:28:38.958 [2024-07-25 17:19:31.318488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.335195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.335251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:38.958 [2024-07-25 17:19:31.335272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.679 ms 00:28:38.958 [2024-07-25 17:19:31.335289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.340984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.341029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:38.958 [2024-07-25 17:19:31.341047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms 00:28:38.958 [2024-07-25 17:19:31.341067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.368437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.368488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:38.958 [2024-07-25 17:19:31.368507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.272 ms 00:28:38.958 [2024-07-25 17:19:31.368522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.384753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.384812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:38.958 [2024-07-25 17:19:31.384832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.184 ms 00:28:38.958 [2024-07-25 17:19:31.384847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 
17:19:31.385065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.385111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:38.958 [2024-07-25 17:19:31.385128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:28:38.958 [2024-07-25 17:19:31.385143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.958 [2024-07-25 17:19:31.410343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.958 [2024-07-25 17:19:31.410393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:38.958 [2024-07-25 17:19:31.410411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.172 ms 00:28:38.958 [2024-07-25 17:19:31.410426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.217 [2024-07-25 17:19:31.435132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.217 [2024-07-25 17:19:31.435183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:39.217 [2024-07-25 17:19:31.435202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.663 ms 00:28:39.217 [2024-07-25 17:19:31.435217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.217 [2024-07-25 17:19:31.460538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.217 [2024-07-25 17:19:31.460591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:39.217 [2024-07-25 17:19:31.460610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.275 ms 00:28:39.217 [2024-07-25 17:19:31.460624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.217 [2024-07-25 17:19:31.485564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.217 [2024-07-25 17:19:31.485614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:39.217 [2024-07-25 17:19:31.485633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:28:39.217 [2024-07-25 17:19:31.485648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.217 [2024-07-25 17:19:31.485693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:39.217 [2024-07-25 17:19:31.485723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485840] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:39.217 [2024-07-25 17:19:31.485946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.485958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.485972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 
17:19:31.486275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:28:39.218 [2024-07-25 17:19:31.486752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.486954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:39.218 [2024-07-25 17:19:31.487451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:39.218 [2024-07-25 17:19:31.487465] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9fc5255-4196-4a04-b04b-38d96588e30f 00:28:39.219 [2024-07-25 17:19:31.487486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:39.219 [2024-07-25 17:19:31.487512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:39.219 [2024-07-25 17:19:31.487530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:39.219 [2024-07-25 17:19:31.487544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:39.219 [2024-07-25 17:19:31.487559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:39.219 [2024-07-25 17:19:31.487572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:39.219 [2024-07-25 17:19:31.487587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:39.219 [2024-07-25 17:19:31.487599] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:39.219 [2024-07-25 17:19:31.487613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:39.219 [2024-07-25 17:19:31.487625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.219 [2024-07-25 17:19:31.487641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:39.219 [2024-07-25 17:19:31.487655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:28:39.219 [2024-07-25 17:19:31.487671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.502455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.219 [2024-07-25 17:19:31.502500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:39.219 [2024-07-25 17:19:31.502518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.733 ms 00:28:39.219 [2024-07-25 17:19:31.502533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.502950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.219 [2024-07-25 17:19:31.503131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:39.219 [2024-07-25 17:19:31.503155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:28:39.219 [2024-07-25 17:19:31.503205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.547093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.219 [2024-07-25 17:19:31.547147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:39.219 [2024-07-25 17:19:31.547168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.219 [2024-07-25 17:19:31.547184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.547250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.219 [2024-07-25 17:19:31.547273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:39.219 [2024-07-25 17:19:31.547286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.219 [2024-07-25 17:19:31.547301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.547417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.219 [2024-07-25 17:19:31.547443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:39.219 [2024-07-25 17:19:31.547457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.219 [2024-07-25 17:19:31.547471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.547500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.219 [2024-07-25 17:19:31.547532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:39.219 [2024-07-25 17:19:31.547545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.219 [2024-07-25 17:19:31.547559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.219 [2024-07-25 17:19:31.630785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.219 [2024-07-25 17:19:31.630868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:28:39.219 [2024-07-25 17:19:31.630891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.219 [2024-07-25 17:19:31.630908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.703541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.703632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:39.477 [2024-07-25 17:19:31.703654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.703681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.703828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.703861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.477 [2024-07-25 17:19:31.703876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.703891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.703965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.704057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.477 [2024-07-25 17:19:31.704075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.704107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.704252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.704285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.477 [2024-07-25 17:19:31.704300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.704316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.704410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.704469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:39.477 [2024-07-25 17:19:31.704484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.704500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.704562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.704594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.477 [2024-07-25 17:19:31.704613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.704629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.704693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:39.477 [2024-07-25 17:19:31.704730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.477 [2024-07-25 17:19:31.704747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:39.477 [2024-07-25 17:19:31.704763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.477 [2024-07-25 17:19:31.704961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
392.239 ms, result 0 00:28:39.477 true 00:28:39.477 17:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82671 00:28:39.477 17:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82671 00:28:39.477 17:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:39.477 [2024-07-25 17:19:31.843924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:39.478 [2024-07-25 17:19:31.844434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83772 ] 00:28:39.735 [2024-07-25 17:19:32.016706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.994 [2024-07-25 17:19:32.242422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.286  Copying: 202/1024 [MB] (202 MBps) Copying: 407/1024 [MB] (204 MBps) Copying: 609/1024 [MB] (202 MBps) Copying: 803/1024 [MB] (193 MBps) Copying: 1000/1024 [MB] (197 MBps) Copying: 1024/1024 [MB] (average 199 MBps) 00:28:46.286 00:28:46.286 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82671 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:46.286 17:19:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:46.544 [2024-07-25 17:19:38.764663] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
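The stretch of log ending here covers the shutdown half of the test: the NBD export is flushed and detached and ftl0 is unloaded (steps 78-80), the SPDK target that hosted it is then killed outright and its trace pid file removed (steps 83-84; bash's "Killed" notice for spdk_tgt appears a little further down), a second 1 GiB file is generated (step 87), and step 88 writes it into the upper half of ftl0 from a standalone spdk_dd process that opens the bdev from the saved ftl.json config. A condensed sketch of that sequence, again using only commands taken from the trace ($SPDK_DIR and $svcpid are illustrative; the pid is 82671 in this run):

# quiesce and detach the NBD export, then unload the FTL bdev
sync /dev/nbd0
"$SPDK_DIR/scripts/rpc.py" nbd_stop_disk /dev/nbd0
"$SPDK_DIR/scripts/rpc.py" bdev_ftl_unload -b ftl0
# the "dirty" event itself: kill the SPDK target and drop its trace pid file
kill -9 "$svcpid"
rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
# prepare a second 1 GiB data set and write it to the second half of ftl0,
# letting spdk_dd bring the bdev up itself from the saved JSON config
"$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 --count=262144 --seek=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"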
00:28:46.544 [2024-07-25 17:19:38.764845] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83847 ] 00:28:46.544 [2024-07-25 17:19:38.933588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.802 [2024-07-25 17:19:39.121758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.061 [2024-07-25 17:19:39.424853] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.061 [2024-07-25 17:19:39.424936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.061 [2024-07-25 17:19:39.491184] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:47.061 [2024-07-25 17:19:39.491649] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:47.061 [2024-07-25 17:19:39.491994] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:47.319 [2024-07-25 17:19:39.780637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.319 [2024-07-25 17:19:39.780696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:47.319 [2024-07-25 17:19:39.780719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:47.320 [2024-07-25 17:19:39.780731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.320 [2024-07-25 17:19:39.780801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.320 [2024-07-25 17:19:39.780825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.320 [2024-07-25 17:19:39.780839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:47.320 [2024-07-25 17:19:39.780851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.320 [2024-07-25 17:19:39.780883] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:47.320 [2024-07-25 17:19:39.781837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:47.320 [2024-07-25 17:19:39.781871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.320 [2024-07-25 17:19:39.781885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.320 [2024-07-25 17:19:39.781899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:28:47.320 [2024-07-25 17:19:39.781910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.320 [2024-07-25 17:19:39.784232] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:47.580 [2024-07-25 17:19:39.799212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.799270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:47.580 [2024-07-25 17:19:39.799297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.981 ms 00:28:47.580 [2024-07-25 17:19:39.799309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.799378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.799399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:47.580 [2024-07-25 17:19:39.799412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:47.580 [2024-07-25 17:19:39.799440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.808744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.808780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.580 [2024-07-25 17:19:39.808795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.189 ms 00:28:47.580 [2024-07-25 17:19:39.808806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.808897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.808917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.580 [2024-07-25 17:19:39.808930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:47.580 [2024-07-25 17:19:39.808940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.809052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.809074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:47.580 [2024-07-25 17:19:39.809093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:47.580 [2024-07-25 17:19:39.809105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.809145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:47.580 [2024-07-25 17:19:39.813575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.813609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.580 [2024-07-25 17:19:39.813623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:28:47.580 [2024-07-25 17:19:39.813634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.813681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.813699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:47.580 [2024-07-25 17:19:39.813711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:47.580 [2024-07-25 17:19:39.813721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.813783] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:47.580 [2024-07-25 17:19:39.813868] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:47.580 [2024-07-25 17:19:39.813923] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:47.580 [2024-07-25 17:19:39.813944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:47.580 [2024-07-25 17:19:39.814067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:47.580 [2024-07-25 17:19:39.814095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:47.580 
[2024-07-25 17:19:39.814111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:47.580 [2024-07-25 17:19:39.814126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:47.580 [2024-07-25 17:19:39.814139] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:47.580 [2024-07-25 17:19:39.814158] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:47.580 [2024-07-25 17:19:39.814170] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:47.580 [2024-07-25 17:19:39.814182] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:47.580 [2024-07-25 17:19:39.814193] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:47.580 [2024-07-25 17:19:39.814206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.814235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:47.580 [2024-07-25 17:19:39.814247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:28:47.580 [2024-07-25 17:19:39.814258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.814363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.580 [2024-07-25 17:19:39.814397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:47.580 [2024-07-25 17:19:39.814416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:47.580 [2024-07-25 17:19:39.814427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.580 [2024-07-25 17:19:39.814526] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:47.580 [2024-07-25 17:19:39.814544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:47.580 [2024-07-25 17:19:39.814557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.580 [2024-07-25 17:19:39.814569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.580 [2024-07-25 17:19:39.814582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:47.580 [2024-07-25 17:19:39.814593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:47.580 [2024-07-25 17:19:39.814606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:47.580 [2024-07-25 17:19:39.814617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:47.580 [2024-07-25 17:19:39.814627] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:47.580 [2024-07-25 17:19:39.814637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.580 [2024-07-25 17:19:39.814684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:47.580 [2024-07-25 17:19:39.814696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:47.581 [2024-07-25 17:19:39.814707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.581 [2024-07-25 17:19:39.814719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:47.581 [2024-07-25 17:19:39.814730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:47.581 [2024-07-25 17:19:39.814741] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:47.581 [2024-07-25 17:19:39.814781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:47.581 [2024-07-25 17:19:39.814792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:47.581 [2024-07-25 17:19:39.814813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.581 [2024-07-25 17:19:39.814835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:47.581 [2024-07-25 17:19:39.814845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.581 [2024-07-25 17:19:39.814867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:47.581 [2024-07-25 17:19:39.814878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.581 [2024-07-25 17:19:39.814899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:47.581 [2024-07-25 17:19:39.814909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.581 [2024-07-25 17:19:39.814931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:47.581 [2024-07-25 17:19:39.814942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:47.581 [2024-07-25 17:19:39.814952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.581 [2024-07-25 17:19:39.814963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:47.581 [2024-07-25 17:19:39.814974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:47.581 [2024-07-25 17:19:39.815016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.581 [2024-07-25 17:19:39.815046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:47.581 [2024-07-25 17:19:39.815058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:47.581 [2024-07-25 17:19:39.815069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.581 [2024-07-25 17:19:39.815079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:47.581 [2024-07-25 17:19:39.815089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:47.581 [2024-07-25 17:19:39.815116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.581 [2024-07-25 17:19:39.815127] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:47.581 [2024-07-25 17:19:39.815139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:47.581 [2024-07-25 17:19:39.815150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.581 [2024-07-25 17:19:39.815162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.581 [2024-07-25 
17:19:39.815181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:47.581 [2024-07-25 17:19:39.815192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:47.581 [2024-07-25 17:19:39.815203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:47.581 [2024-07-25 17:19:39.815214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:47.581 [2024-07-25 17:19:39.815225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:47.581 [2024-07-25 17:19:39.815235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:47.581 [2024-07-25 17:19:39.815248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:47.581 [2024-07-25 17:19:39.815262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:47.581 [2024-07-25 17:19:39.815287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:47.581 [2024-07-25 17:19:39.815299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:47.581 [2024-07-25 17:19:39.815310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:47.581 [2024-07-25 17:19:39.815322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:47.581 [2024-07-25 17:19:39.815333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:47.581 [2024-07-25 17:19:39.815345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:47.581 [2024-07-25 17:19:39.815372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:47.581 [2024-07-25 17:19:39.815384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:47.581 [2024-07-25 17:19:39.815396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:47.581 [2024-07-25 17:19:39.815459] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:47.581 [2024-07-25 17:19:39.815471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:47.581 [2024-07-25 17:19:39.815495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:47.581 [2024-07-25 17:19:39.815506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:47.581 [2024-07-25 17:19:39.815518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:47.581 [2024-07-25 17:19:39.815530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.581 [2024-07-25 17:19:39.815542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:47.581 [2024-07-25 17:19:39.815554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:28:47.581 [2024-07-25 17:19:39.815564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.581 [2024-07-25 17:19:39.862261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.581 [2024-07-25 17:19:39.862333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:47.581 [2024-07-25 17:19:39.862353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.630 ms 00:28:47.581 [2024-07-25 17:19:39.862366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.581 [2024-07-25 17:19:39.862498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.581 [2024-07-25 17:19:39.862517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:47.581 [2024-07-25 17:19:39.862536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:47.581 [2024-07-25 17:19:39.862547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.581 [2024-07-25 17:19:39.900218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.581 [2024-07-25 17:19:39.900275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:47.581 [2024-07-25 17:19:39.900295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.578 ms 00:28:47.581 [2024-07-25 17:19:39.900307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.581 [2024-07-25 17:19:39.900371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.900405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:47.582 [2024-07-25 17:19:39.900419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:47.582 [2024-07-25 17:19:39.900431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.901218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.901240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:47.582 [2024-07-25 17:19:39.901254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:28:47.582 [2024-07-25 17:19:39.901266] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.901466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.901487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:47.582 [2024-07-25 17:19:39.901516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:28:47.582 [2024-07-25 17:19:39.901527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.917907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.917948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:47.582 [2024-07-25 17:19:39.917964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.351 ms 00:28:47.582 [2024-07-25 17:19:39.917986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.932414] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:47.582 [2024-07-25 17:19:39.932456] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:47.582 [2024-07-25 17:19:39.932475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.932487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:47.582 [2024-07-25 17:19:39.932500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.340 ms 00:28:47.582 [2024-07-25 17:19:39.932526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.956394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.956434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:47.582 [2024-07-25 17:19:39.956451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:28:47.582 [2024-07-25 17:19:39.956462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.969206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.969244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:47.582 [2024-07-25 17:19:39.969260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:28:47.582 [2024-07-25 17:19:39.969271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.981759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.981798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:47.582 [2024-07-25 17:19:39.981812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.446 ms 00:28:47.582 [2024-07-25 17:19:39.981823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.582 [2024-07-25 17:19:39.982620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.582 [2024-07-25 17:19:39.982675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:47.582 [2024-07-25 17:19:39.982693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:28:47.582 [2024-07-25 17:19:39.982705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
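The startup trace above, emitted inside the step-88 spdk_dd run, is what a non-clean FTL bring-up looks like in this log: the blobstore backing the write buffer goes through recovery, the superblock load reports SHM: clean 0, and the NV cache, valid map, band info and trim metadata are restored step by step before I/O starts. When scanning a console log like this one for that signature, a plain grep over those recovery markers is usually enough (a sketch only; build.log stands in for wherever this console output was saved):

grep -E 'Performing recovery on blobstore|SHM: clean|name: Restore' build.log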
00:28:47.841 [2024-07-25 17:19:40.057040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.841 [2024-07-25 17:19:40.057122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:47.841 [2024-07-25 17:19:40.057145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.305 ms 00:28:47.841 [2024-07-25 17:19:40.057158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.841 [2024-07-25 17:19:40.069382] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:47.841 [2024-07-25 17:19:40.073707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.841 [2024-07-25 17:19:40.073743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:47.841 [2024-07-25 17:19:40.073760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.454 ms 00:28:47.841 [2024-07-25 17:19:40.073773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.841 [2024-07-25 17:19:40.073888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 17:19:40.073913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:47.842 [2024-07-25 17:19:40.073928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:47.842 [2024-07-25 17:19:40.073940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.074089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 17:19:40.074111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:47.842 [2024-07-25 17:19:40.074125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:47.842 [2024-07-25 17:19:40.074137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.074191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 17:19:40.074208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:47.842 [2024-07-25 17:19:40.074229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:47.842 [2024-07-25 17:19:40.074242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.074305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:47.842 [2024-07-25 17:19:40.074327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 17:19:40.074339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:47.842 [2024-07-25 17:19:40.074353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:47.842 [2024-07-25 17:19:40.074365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.101274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 17:19:40.101319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:47.842 [2024-07-25 17:19:40.101335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.845 ms 00:28:47.842 [2024-07-25 17:19:40.101347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.101438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.842 [2024-07-25 
17:19:40.101476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:47.842 [2024-07-25 17:19:40.101490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:47.842 [2024-07-25 17:19:40.101501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.842 [2024-07-25 17:19:40.103261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.897 ms, result 0 00:29:31.164  Copying: 21/1024 [MB] (21 MBps) Copying: 46/1024 [MB] (24 MBps) Copying: 70/1024 [MB] (24 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 166/1024 [MB] (24 MBps) Copying: 190/1024 [MB] (24 MBps) Copying: 215/1024 [MB] (24 MBps) Copying: 238/1024 [MB] (23 MBps) Copying: 263/1024 [MB] (24 MBps) Copying: 287/1024 [MB] (24 MBps) Copying: 311/1024 [MB] (24 MBps) Copying: 336/1024 [MB] (24 MBps) Copying: 360/1024 [MB] (24 MBps) Copying: 385/1024 [MB] (24 MBps) Copying: 409/1024 [MB] (24 MBps) Copying: 434/1024 [MB] (24 MBps) Copying: 458/1024 [MB] (24 MBps) Copying: 483/1024 [MB] (24 MBps) Copying: 507/1024 [MB] (24 MBps) Copying: 531/1024 [MB] (24 MBps) Copying: 555/1024 [MB] (23 MBps) Copying: 579/1024 [MB] (23 MBps) Copying: 604/1024 [MB] (24 MBps) Copying: 628/1024 [MB] (24 MBps) Copying: 652/1024 [MB] (24 MBps) Copying: 676/1024 [MB] (24 MBps) Copying: 700/1024 [MB] (24 MBps) Copying: 725/1024 [MB] (24 MBps) Copying: 750/1024 [MB] (24 MBps) Copying: 774/1024 [MB] (24 MBps) Copying: 799/1024 [MB] (24 MBps) Copying: 823/1024 [MB] (24 MBps) Copying: 847/1024 [MB] (24 MBps) Copying: 871/1024 [MB] (24 MBps) Copying: 895/1024 [MB] (24 MBps) Copying: 920/1024 [MB] (24 MBps) Copying: 944/1024 [MB] (24 MBps) Copying: 968/1024 [MB] (24 MBps) Copying: 993/1024 [MB] (24 MBps) Copying: 1017/1024 [MB] (24 MBps) Copying: 1048276/1048576 [kB] (5888 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-25 17:20:23.501144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.164 [2024-07-25 17:20:23.501374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:31.164 [2024-07-25 17:20:23.501411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:31.164 [2024-07-25 17:20:23.501425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.164 [2024-07-25 17:20:23.502631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:31.164 [2024-07-25 17:20:23.508515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.164 [2024-07-25 17:20:23.508572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:31.164 [2024-07-25 17:20:23.508603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.816 ms 00:29:31.165 [2024-07-25 17:20:23.508613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.165 [2024-07-25 17:20:23.521419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.165 [2024-07-25 17:20:23.521483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:31.165 [2024-07-25 17:20:23.521515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.482 ms 00:29:31.165 [2024-07-25 17:20:23.521525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.165 [2024-07-25 17:20:23.543100] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:29:31.165 [2024-07-25 17:20:23.543157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:31.165 [2024-07-25 17:20:23.543189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.554 ms 00:29:31.165 [2024-07-25 17:20:23.543200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.165 [2024-07-25 17:20:23.548638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.165 [2024-07-25 17:20:23.548689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:31.165 [2024-07-25 17:20:23.548724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.404 ms 00:29:31.165 [2024-07-25 17:20:23.548734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.165 [2024-07-25 17:20:23.574713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.165 [2024-07-25 17:20:23.574770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:31.165 [2024-07-25 17:20:23.574801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.921 ms 00:29:31.165 [2024-07-25 17:20:23.574818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.165 [2024-07-25 17:20:23.590189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.165 [2024-07-25 17:20:23.590244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:31.165 [2024-07-25 17:20:23.590275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.331 ms 00:29:31.165 [2024-07-25 17:20:23.590285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.424 [2024-07-25 17:20:23.711187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.424 [2024-07-25 17:20:23.711272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:31.424 [2024-07-25 17:20:23.711290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.862 ms 00:29:31.424 [2024-07-25 17:20:23.711308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.424 [2024-07-25 17:20:23.736360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.424 [2024-07-25 17:20:23.736415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:31.424 [2024-07-25 17:20:23.736445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.031 ms 00:29:31.424 [2024-07-25 17:20:23.736454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.424 [2024-07-25 17:20:23.762343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.424 [2024-07-25 17:20:23.762407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:31.424 [2024-07-25 17:20:23.762443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.851 ms 00:29:31.424 [2024-07-25 17:20:23.762452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.424 [2024-07-25 17:20:23.787063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.424 [2024-07-25 17:20:23.787125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:31.424 [2024-07-25 17:20:23.787156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.574 ms 00:29:31.424 [2024-07-25 17:20:23.787165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:31.424 [2024-07-25 17:20:23.811383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.424 [2024-07-25 17:20:23.811435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:31.424 [2024-07-25 17:20:23.811465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.157 ms 00:29:31.424 [2024-07-25 17:20:23.811474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.424 [2024-07-25 17:20:23.811511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:31.424 [2024-07-25 17:20:23.811531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:29:31.424 [2024-07-25 17:20:23.811545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:29:31.424 [2024-07-25 17:20:23.811787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.811990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:31.424 [2024-07-25 17:20:23.812163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812630] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:31.425 [2024-07-25 17:20:23.812693] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:31.425 [2024-07-25 17:20:23.812704] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9fc5255-4196-4a04-b04b-38d96588e30f 00:29:31.425 [2024-07-25 17:20:23.812719] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:29:31.425 [2024-07-25 17:20:23.812729] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:29:31.425 [2024-07-25 17:20:23.812742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:29:31.425 [2024-07-25 17:20:23.812753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:29:31.425 [2024-07-25 17:20:23.812763] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:31.425 [2024-07-25 17:20:23.812773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:31.425 [2024-07-25 17:20:23.812783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:31.425 [2024-07-25 17:20:23.812792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:31.425 [2024-07-25 17:20:23.812801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:31.425 [2024-07-25 17:20:23.812811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.425 [2024-07-25 17:20:23.812822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:31.425 [2024-07-25 17:20:23.812844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:29:31.425 [2024-07-25 17:20:23.812862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.827430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.425 [2024-07-25 17:20:23.827480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:31.425 [2024-07-25 17:20:23.827511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:29:31.425 [2024-07-25 17:20:23.827521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.827993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.425 [2024-07-25 17:20:23.828050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:31.425 [2024-07-25 17:20:23.828065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:29:31.425 [2024-07-25 17:20:23.828076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.860102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.425 [2024-07-25 17:20:23.860159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.425 [2024-07-25 
17:20:23.860189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.425 [2024-07-25 17:20:23.860199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.860250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.425 [2024-07-25 17:20:23.860263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.425 [2024-07-25 17:20:23.860274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.425 [2024-07-25 17:20:23.860283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.860353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.425 [2024-07-25 17:20:23.860370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.425 [2024-07-25 17:20:23.860381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.425 [2024-07-25 17:20:23.860407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.425 [2024-07-25 17:20:23.860442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.425 [2024-07-25 17:20:23.860455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.425 [2024-07-25 17:20:23.860465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.425 [2024-07-25 17:20:23.860475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.684 [2024-07-25 17:20:23.941078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.684 [2024-07-25 17:20:23.941150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.684 [2024-07-25 17:20:23.941182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.684 [2024-07-25 17:20:23.941193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.684 [2024-07-25 17:20:24.010180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.685 [2024-07-25 17:20:24.010277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.685 [2024-07-25 17:20:24.010412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.685 [2024-07-25 17:20:24.010540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010709] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.685 [2024-07-25 17:20:24.010721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:31.685 [2024-07-25 17:20:24.010808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.685 [2024-07-25 17:20:24.010890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.010900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.010950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.685 [2024-07-25 17:20:24.010966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.685 [2024-07-25 17:20:24.011017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.685 [2024-07-25 17:20:24.011032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.685 [2024-07-25 17:20:24.011190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 512.953 ms, result 0 00:29:33.059 00:29:33.059 00:29:33.059 17:20:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:34.962 17:20:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:34.962 [2024-07-25 17:20:27.251741] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:34.962 [2024-07-25 17:20:27.251918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84324 ] 00:29:34.962 [2024-07-25 17:20:27.417797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.309 [2024-07-25 17:20:27.667560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.581 [2024-07-25 17:20:27.968887] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.581 [2024-07-25 17:20:27.969010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.841 [2024-07-25 17:20:28.128713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.128806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:35.841 [2024-07-25 17:20:28.128842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:35.841 [2024-07-25 17:20:28.128853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.128912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.128930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:35.841 [2024-07-25 17:20:28.128942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:35.841 [2024-07-25 17:20:28.128956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.128987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:35.841 [2024-07-25 17:20:28.129850] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:35.841 [2024-07-25 17:20:28.129907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.129922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:35.841 [2024-07-25 17:20:28.129940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:29:35.841 [2024-07-25 17:20:28.129950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.132187] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:35.841 [2024-07-25 17:20:28.146215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.146270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:35.841 [2024-07-25 17:20:28.146302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms 00:29:35.841 [2024-07-25 17:20:28.146313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.146381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.146401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:35.841 [2024-07-25 17:20:28.146413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:35.841 [2024-07-25 17:20:28.146423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.155183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:35.841 [2024-07-25 17:20:28.155258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:35.841 [2024-07-25 17:20:28.155291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.687 ms 00:29:35.841 [2024-07-25 17:20:28.155301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.155446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.155464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:35.841 [2024-07-25 17:20:28.155476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:35.841 [2024-07-25 17:20:28.155487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.155557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.155588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:35.841 [2024-07-25 17:20:28.155616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:35.841 [2024-07-25 17:20:28.155626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.155675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:35.841 [2024-07-25 17:20:28.160308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.160361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:35.841 [2024-07-25 17:20:28.160408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:29:35.841 [2024-07-25 17:20:28.160419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.160468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.160484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:35.841 [2024-07-25 17:20:28.160496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:35.841 [2024-07-25 17:20:28.160506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.160566] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:35.841 [2024-07-25 17:20:28.160599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:35.841 [2024-07-25 17:20:28.160671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:35.841 [2024-07-25 17:20:28.160695] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:35.841 [2024-07-25 17:20:28.160794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:35.841 [2024-07-25 17:20:28.160809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:35.841 [2024-07-25 17:20:28.160828] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:35.841 [2024-07-25 17:20:28.160842] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:35.841 [2024-07-25 17:20:28.160855] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:35.841 [2024-07-25 17:20:28.160867] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:35.841 [2024-07-25 17:20:28.160878] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:35.841 [2024-07-25 17:20:28.160888] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:35.841 [2024-07-25 17:20:28.160899] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:35.841 [2024-07-25 17:20:28.160911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.160926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:35.841 [2024-07-25 17:20:28.160937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:29:35.841 [2024-07-25 17:20:28.160947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.161080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.841 [2024-07-25 17:20:28.161099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:35.841 [2024-07-25 17:20:28.161111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:35.841 [2024-07-25 17:20:28.161121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.841 [2024-07-25 17:20:28.161226] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:35.841 [2024-07-25 17:20:28.161243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:35.841 [2024-07-25 17:20:28.161260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.841 [2024-07-25 17:20:28.161272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.841 [2024-07-25 17:20:28.161283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:35.841 [2024-07-25 17:20:28.161293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:35.841 [2024-07-25 17:20:28.161304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:35.841 [2024-07-25 17:20:28.161314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:35.841 [2024-07-25 17:20:28.161324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:35.841 [2024-07-25 17:20:28.161333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.841 [2024-07-25 17:20:28.161345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:35.842 [2024-07-25 17:20:28.161355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:35.842 [2024-07-25 17:20:28.161365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.842 [2024-07-25 17:20:28.161375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:35.842 [2024-07-25 17:20:28.161385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:35.842 [2024-07-25 17:20:28.161395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:35.842 [2024-07-25 17:20:28.161415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161441] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:35.842 [2024-07-25 17:20:28.161473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:35.842 [2024-07-25 17:20:28.161503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:35.842 [2024-07-25 17:20:28.161532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:35.842 [2024-07-25 17:20:28.161561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:35.842 [2024-07-25 17:20:28.161591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.842 [2024-07-25 17:20:28.161610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:35.842 [2024-07-25 17:20:28.161619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:35.842 [2024-07-25 17:20:28.161628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.842 [2024-07-25 17:20:28.161638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:35.842 [2024-07-25 17:20:28.161648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:35.842 [2024-07-25 17:20:28.161657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:35.842 [2024-07-25 17:20:28.161677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:35.842 [2024-07-25 17:20:28.161687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161696] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:35.842 [2024-07-25 17:20:28.161706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:35.842 [2024-07-25 17:20:28.161717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.842 [2024-07-25 17:20:28.161739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:35.842 [2024-07-25 17:20:28.161749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:35.842 [2024-07-25 17:20:28.161761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:35.842 
[2024-07-25 17:20:28.161771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:35.842 [2024-07-25 17:20:28.161781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:35.842 [2024-07-25 17:20:28.161791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:35.842 [2024-07-25 17:20:28.161803] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:35.842 [2024-07-25 17:20:28.161816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.161828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:35.842 [2024-07-25 17:20:28.161839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:35.842 [2024-07-25 17:20:28.161850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:35.842 [2024-07-25 17:20:28.161860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:35.842 [2024-07-25 17:20:28.161871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:35.842 [2024-07-25 17:20:28.161881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:35.842 [2024-07-25 17:20:28.161891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:35.842 [2024-07-25 17:20:28.161902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:35.842 [2024-07-25 17:20:28.161912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:35.842 [2024-07-25 17:20:28.161922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.161933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.161943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.161953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.161963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:35.842 [2024-07-25 17:20:28.161974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:35.842 [2024-07-25 17:20:28.161986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.162002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.842 [2024-07-25 17:20:28.162028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:35.842 [2024-07-25 17:20:28.162040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:35.842 [2024-07-25 17:20:28.162051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:35.842 [2024-07-25 17:20:28.162062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.162073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:35.842 [2024-07-25 17:20:28.162084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:29:35.842 [2024-07-25 17:20:28.162095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.204507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.204585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:35.842 [2024-07-25 17:20:28.204621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.347 ms 00:29:35.842 [2024-07-25 17:20:28.204633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.204754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.204769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:35.842 [2024-07-25 17:20:28.204781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:35.842 [2024-07-25 17:20:28.204790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.240561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.240628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:35.842 [2024-07-25 17:20:28.240660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.667 ms 00:29:35.842 [2024-07-25 17:20:28.240670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.240723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.240739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:35.842 [2024-07-25 17:20:28.240750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:35.842 [2024-07-25 17:20:28.240767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.241499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.241544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:35.842 [2024-07-25 17:20:28.241589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:29:35.842 [2024-07-25 17:20:28.241600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.241776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.241830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:35.842 [2024-07-25 17:20:28.241842] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:29:35.842 [2024-07-25 17:20:28.241853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.257958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.842 [2024-07-25 17:20:28.258025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:35.842 [2024-07-25 17:20:28.258058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.074 ms 00:29:35.842 [2024-07-25 17:20:28.258073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.842 [2024-07-25 17:20:28.272887] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:35.842 [2024-07-25 17:20:28.272948] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:35.843 [2024-07-25 17:20:28.272981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.843 [2024-07-25 17:20:28.273003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:35.843 [2024-07-25 17:20:28.273017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.790 ms 00:29:35.843 [2024-07-25 17:20:28.273026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.843 [2024-07-25 17:20:28.297614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.843 [2024-07-25 17:20:28.297701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:35.843 [2024-07-25 17:20:28.297735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:29:35.843 [2024-07-25 17:20:28.297746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.101 [2024-07-25 17:20:28.310944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.101 [2024-07-25 17:20:28.311038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:36.101 [2024-07-25 17:20:28.311070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.132 ms 00:29:36.101 [2024-07-25 17:20:28.311080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.323662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.323717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:36.102 [2024-07-25 17:20:28.323747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.540 ms 00:29:36.102 [2024-07-25 17:20:28.323757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.324761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.324808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:36.102 [2024-07-25 17:20:28.324838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:29:36.102 [2024-07-25 17:20:28.324849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.398053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.398139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:36.102 [2024-07-25 17:20:28.398175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.174 ms 00:29:36.102 [2024-07-25 17:20:28.398194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.409468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:36.102 [2024-07-25 17:20:28.412592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.412646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:36.102 [2024-07-25 17:20:28.412676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.330 ms 00:29:36.102 [2024-07-25 17:20:28.412687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.412798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.412817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:36.102 [2024-07-25 17:20:28.412830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:36.102 [2024-07-25 17:20:28.412840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.414945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.415027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:36.102 [2024-07-25 17:20:28.415057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:29:36.102 [2024-07-25 17:20:28.415067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.415101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.415115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:36.102 [2024-07-25 17:20:28.415126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:36.102 [2024-07-25 17:20:28.415136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.415174] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:36.102 [2024-07-25 17:20:28.415199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.415214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:36.102 [2024-07-25 17:20:28.415224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:36.102 [2024-07-25 17:20:28.415250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.442468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.442536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:36.102 [2024-07-25 17:20:28.442567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.179 ms 00:29:36.102 [2024-07-25 17:20:28.442585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.102 [2024-07-25 17:20:28.442702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.102 [2024-07-25 17:20:28.442720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:36.102 [2024-07-25 17:20:28.442732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:29:36.102 [2024-07-25 17:20:28.442742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:36.102 [2024-07-25 17:20:28.450733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.199 ms, result 0 00:30:15.067  Copying: 932/1048576 [kB] (932 kBps) Copying: 5084/1048576 [kB] (4152 kBps) Copying: 28/1024 [MB] (23 MBps) Copying: 55/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (26 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 164/1024 [MB] (27 MBps) Copying: 192/1024 [MB] (28 MBps) Copying: 219/1024 [MB] (27 MBps) Copying: 247/1024 [MB] (27 MBps) Copying: 274/1024 [MB] (27 MBps) Copying: 303/1024 [MB] (28 MBps) Copying: 331/1024 [MB] (27 MBps) Copying: 359/1024 [MB] (27 MBps) Copying: 387/1024 [MB] (27 MBps) Copying: 414/1024 [MB] (27 MBps) Copying: 442/1024 [MB] (28 MBps) Copying: 470/1024 [MB] (28 MBps) Copying: 499/1024 [MB] (28 MBps) Copying: 527/1024 [MB] (28 MBps) Copying: 555/1024 [MB] (28 MBps) Copying: 583/1024 [MB] (28 MBps) Copying: 612/1024 [MB] (28 MBps) Copying: 640/1024 [MB] (28 MBps) Copying: 669/1024 [MB] (28 MBps) Copying: 697/1024 [MB] (28 MBps) Copying: 726/1024 [MB] (28 MBps) Copying: 755/1024 [MB] (28 MBps) Copying: 784/1024 [MB] (28 MBps) Copying: 813/1024 [MB] (29 MBps) Copying: 842/1024 [MB] (28 MBps) Copying: 870/1024 [MB] (28 MBps) Copying: 898/1024 [MB] (28 MBps) Copying: 926/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (28 MBps) Copying: 983/1024 [MB] (28 MBps) Copying: 1012/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-25 17:21:07.499675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.067 [2024-07-25 17:21:07.500083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:15.067 [2024-07-25 17:21:07.500246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:15.067 [2024-07-25 17:21:07.500273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.067 [2024-07-25 17:21:07.500314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:15.067 [2024-07-25 17:21:07.505333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.067 [2024-07-25 17:21:07.505374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:15.067 [2024-07-25 17:21:07.505391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:30:15.067 [2024-07-25 17:21:07.505403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.067 [2024-07-25 17:21:07.505657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.067 [2024-07-25 17:21:07.505675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:15.067 [2024-07-25 17:21:07.505696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:30:15.067 [2024-07-25 17:21:07.505708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.067 [2024-07-25 17:21:07.518177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.067 [2024-07-25 17:21:07.518396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:15.068 [2024-07-25 17:21:07.518524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.445 ms 00:30:15.068 [2024-07-25 17:21:07.518655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.068 [2024-07-25 17:21:07.524781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.068 [2024-07-25 
17:21:07.524952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:15.068 [2024-07-25 17:21:07.525128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.970 ms 00:30:15.068 [2024-07-25 17:21:07.525161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.551856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.551899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:15.327 [2024-07-25 17:21:07.551930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.623 ms 00:30:15.327 [2024-07-25 17:21:07.551940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.566962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.567197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:15.327 [2024-07-25 17:21:07.567228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.967 ms 00:30:15.327 [2024-07-25 17:21:07.567241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.571183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.571225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:15.327 [2024-07-25 17:21:07.571271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:30:15.327 [2024-07-25 17:21:07.571282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.596254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.596291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:15.327 [2024-07-25 17:21:07.596321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.952 ms 00:30:15.327 [2024-07-25 17:21:07.596331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.621228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.621265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:15.327 [2024-07-25 17:21:07.621296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.860 ms 00:30:15.327 [2024-07-25 17:21:07.621305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.327 [2024-07-25 17:21:07.645696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.327 [2024-07-25 17:21:07.645733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:15.328 [2024-07-25 17:21:07.645763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.355 ms 00:30:15.328 [2024-07-25 17:21:07.645786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.328 [2024-07-25 17:21:07.670120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.328 [2024-07-25 17:21:07.670156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:15.328 [2024-07-25 17:21:07.670186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.268 ms 00:30:15.328 [2024-07-25 17:21:07.670196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.328 [2024-07-25 17:21:07.670232] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:15.328 [2024-07-25 17:21:07.670253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:15.328 [2024-07-25 17:21:07.670265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:30:15.328 [2024-07-25 17:21:07.670277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670500] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 
17:21:07.670792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.670989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:30:15.328 [2024-07-25 17:21:07.671112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:15.328 [2024-07-25 17:21:07.671174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:15.329 [2024-07-25 17:21:07.671434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:15.329 [2024-07-25 17:21:07.671445] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9fc5255-4196-4a04-b04b-38d96588e30f 00:30:15.329 [2024-07-25 17:21:07.671456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:30:15.329 [2024-07-25 17:21:07.671471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137664 00:30:15.329 [2024-07-25 17:21:07.671481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135680 00:30:15.329 [2024-07-25 17:21:07.671491] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:30:15.329 [2024-07-25 17:21:07.671504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:15.329 [2024-07-25 17:21:07.671515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:15.329 [2024-07-25 17:21:07.671524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:15.329 [2024-07-25 17:21:07.671533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:15.329 [2024-07-25 17:21:07.671542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:15.329 [2024-07-25 17:21:07.671552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.329 [2024-07-25 17:21:07.671562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:15.329 [2024-07-25 17:21:07.671573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:30:15.329 [2024-07-25 17:21:07.671583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.685707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.329 [2024-07-25 17:21:07.685742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:15.329 [2024-07-25 17:21:07.685778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.102 ms 00:30:15.329 [2024-07-25 17:21:07.685797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.686313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.329 [2024-07-25 17:21:07.686338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:15.329 [2024-07-25 17:21:07.686353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:30:15.329 [2024-07-25 17:21:07.686364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.717953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.329 [2024-07-25 17:21:07.718022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:15.329 [2024-07-25 17:21:07.718054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.329 [2024-07-25 17:21:07.718065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.718130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.329 [2024-07-25 17:21:07.718160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.329 [2024-07-25 17:21:07.718170] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.329 [2024-07-25 17:21:07.718180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.718272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.329 [2024-07-25 17:21:07.718295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.329 [2024-07-25 17:21:07.718307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.329 [2024-07-25 17:21:07.718317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.329 [2024-07-25 17:21:07.718337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.329 [2024-07-25 17:21:07.718349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.329 [2024-07-25 17:21:07.718360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.329 [2024-07-25 17:21:07.718394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.799322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.799382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.588 [2024-07-25 17:21:07.799414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.799424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.588 [2024-07-25 17:21:07.869129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:15.588 [2024-07-25 17:21:07.869247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:15.588 [2024-07-25 17:21:07.869352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:15.588 [2024-07-25 17:21:07.869520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:30:15.588 [2024-07-25 17:21:07.869606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:15.588 [2024-07-25 17:21:07.869700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.588 [2024-07-25 17:21:07.869782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:15.588 [2024-07-25 17:21:07.869792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.588 [2024-07-25 17:21:07.869802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.588 [2024-07-25 17:21:07.869934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.229 ms, result 0 00:30:16.523 00:30:16.523 00:30:16.523 17:21:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:18.431 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:18.431 17:21:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:18.431 [2024-07-25 17:21:10.662360] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
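For orientation, dirty_shutdown.sh step 94 above checks testfile against its recorded checksum (md5sum -c testfile.md5), and step 95 uses spdk_dd to read a region back out of ftl0 into testfile2 with --skip=262144 --count=262144. Assuming a 4096-byte FTL logical block size (consistent with the 1024/1024 [MB] copy totals in this log), that read-back is exactly 1 GiB; the write-amplification figure in the shutdown statistics above likewise follows from the reported counters. A quick sanity check on both numbers:

    # sanity checks against figures printed in this log (4096 B block size is an assumption)
    blocks = 262144                               # --skip / --count passed to spdk_dd
    print(blocks * 4096 / (1024 ** 2))            # -> 1024.0 MiB, matching the copy progress

    total_writes, user_writes = 137664, 135680    # from ftl_dev_dump_stats above
    print(round(total_writes / user_writes, 4))   # -> 1.0146, the WAF reported above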
00:30:18.431 [2024-07-25 17:21:10.662507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84752 ] 00:30:18.431 [2024-07-25 17:21:10.824387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.689 [2024-07-25 17:21:11.047377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.948 [2024-07-25 17:21:11.344353] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:18.948 [2024-07-25 17:21:11.344444] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:19.207 [2024-07-25 17:21:11.503610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.207 [2024-07-25 17:21:11.503655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:19.207 [2024-07-25 17:21:11.503689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:19.207 [2024-07-25 17:21:11.503700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.207 [2024-07-25 17:21:11.503756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.207 [2024-07-25 17:21:11.503773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:19.207 [2024-07-25 17:21:11.503784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:19.207 [2024-07-25 17:21:11.503797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.503826] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:19.208 [2024-07-25 17:21:11.504742] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:19.208 [2024-07-25 17:21:11.504785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.504798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:19.208 [2024-07-25 17:21:11.504810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:30:19.208 [2024-07-25 17:21:11.504820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.506823] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:19.208 [2024-07-25 17:21:11.521052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.521100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:19.208 [2024-07-25 17:21:11.521133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.230 ms 00:30:19.208 [2024-07-25 17:21:11.521143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.521205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.521225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:19.208 [2024-07-25 17:21:11.521236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:19.208 [2024-07-25 17:21:11.521245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.529711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:19.208 [2024-07-25 17:21:11.529746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:19.208 [2024-07-25 17:21:11.529775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.386 ms 00:30:19.208 [2024-07-25 17:21:11.529785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.529870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.529887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:19.208 [2024-07-25 17:21:11.529898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:19.208 [2024-07-25 17:21:11.529907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.529961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.529977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:19.208 [2024-07-25 17:21:11.530003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:19.208 [2024-07-25 17:21:11.530051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.530084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:19.208 [2024-07-25 17:21:11.534326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.534374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:19.208 [2024-07-25 17:21:11.534403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:30:19.208 [2024-07-25 17:21:11.534412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.534457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.534472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:19.208 [2024-07-25 17:21:11.534483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:19.208 [2024-07-25 17:21:11.534492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.534549] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:19.208 [2024-07-25 17:21:11.534581] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:19.208 [2024-07-25 17:21:11.534639] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:19.208 [2024-07-25 17:21:11.534680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:19.208 [2024-07-25 17:21:11.534770] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:19.208 [2024-07-25 17:21:11.534784] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:19.208 [2024-07-25 17:21:11.534796] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:19.208 [2024-07-25 17:21:11.534809] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:19.208 [2024-07-25 17:21:11.534820] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:19.208 [2024-07-25 17:21:11.534831] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:19.208 [2024-07-25 17:21:11.534840] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:19.208 [2024-07-25 17:21:11.534850] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:19.208 [2024-07-25 17:21:11.534859] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:19.208 [2024-07-25 17:21:11.534870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.534884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:19.208 [2024-07-25 17:21:11.534895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:30:19.208 [2024-07-25 17:21:11.534904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.535047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.208 [2024-07-25 17:21:11.535064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:19.208 [2024-07-25 17:21:11.535075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:19.208 [2024-07-25 17:21:11.535085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.208 [2024-07-25 17:21:11.535180] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:19.208 [2024-07-25 17:21:11.535196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:19.208 [2024-07-25 17:21:11.535218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:19.208 [2024-07-25 17:21:11.535248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535257] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:19.208 [2024-07-25 17:21:11.535275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:19.208 [2024-07-25 17:21:11.535309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:19.208 [2024-07-25 17:21:11.535334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:19.208 [2024-07-25 17:21:11.535358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:19.208 [2024-07-25 17:21:11.535367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:19.208 [2024-07-25 17:21:11.535380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:19.208 [2024-07-25 17:21:11.535389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:19.208 [2024-07-25 17:21:11.535408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535417] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:19.208 [2024-07-25 17:21:11.535447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:19.208 [2024-07-25 17:21:11.535476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:19.208 [2024-07-25 17:21:11.535504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:19.208 [2024-07-25 17:21:11.535530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:19.208 [2024-07-25 17:21:11.535558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:19.208 [2024-07-25 17:21:11.535575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:19.208 [2024-07-25 17:21:11.535585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:19.208 [2024-07-25 17:21:11.535593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:19.208 [2024-07-25 17:21:11.535602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:19.208 [2024-07-25 17:21:11.535612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:19.208 [2024-07-25 17:21:11.535621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:19.208 [2024-07-25 17:21:11.535638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:19.208 [2024-07-25 17:21:11.535647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.208 [2024-07-25 17:21:11.535656] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:19.208 [2024-07-25 17:21:11.535666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:19.208 [2024-07-25 17:21:11.535675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:19.208 [2024-07-25 17:21:11.535695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.209 [2024-07-25 17:21:11.535706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:19.209 [2024-07-25 17:21:11.535716] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:19.209 [2024-07-25 17:21:11.535726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:19.209 
[2024-07-25 17:21:11.535736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:19.209 [2024-07-25 17:21:11.535746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:19.209 [2024-07-25 17:21:11.535756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:19.209 [2024-07-25 17:21:11.535767] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:19.209 [2024-07-25 17:21:11.535780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:19.209 [2024-07-25 17:21:11.535803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:19.209 [2024-07-25 17:21:11.535813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:19.209 [2024-07-25 17:21:11.535823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:19.209 [2024-07-25 17:21:11.535833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:19.209 [2024-07-25 17:21:11.535843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:19.209 [2024-07-25 17:21:11.535854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:19.209 [2024-07-25 17:21:11.535863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:19.209 [2024-07-25 17:21:11.535874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:19.209 [2024-07-25 17:21:11.535883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:19.209 [2024-07-25 17:21:11.535933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:19.209 [2024-07-25 17:21:11.535944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:19.209 [2024-07-25 17:21:11.535970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:19.209 [2024-07-25 17:21:11.535980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:19.209 [2024-07-25 17:21:11.535990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:19.209 [2024-07-25 17:21:11.536001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.536011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:19.209 [2024-07-25 17:21:11.536022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:30:19.209 [2024-07-25 17:21:11.536033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.579630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.579684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:19.209 [2024-07-25 17:21:11.579718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.523 ms 00:30:19.209 [2024-07-25 17:21:11.579738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.579846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.579862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:19.209 [2024-07-25 17:21:11.579880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:19.209 [2024-07-25 17:21:11.579890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.617760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.617804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:19.209 [2024-07-25 17:21:11.617836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.788 ms 00:30:19.209 [2024-07-25 17:21:11.617846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.617898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.617912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:19.209 [2024-07-25 17:21:11.617923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:19.209 [2024-07-25 17:21:11.617938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.618691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.618717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:19.209 [2024-07-25 17:21:11.618730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:30:19.209 [2024-07-25 17:21:11.618742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.618922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.618941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:19.209 [2024-07-25 17:21:11.618953] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:30:19.209 [2024-07-25 17:21:11.618978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.634949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.635059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:19.209 [2024-07-25 17:21:11.635077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.927 ms 00:30:19.209 [2024-07-25 17:21:11.635093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.209 [2024-07-25 17:21:11.649155] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:19.209 [2024-07-25 17:21:11.649194] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:19.209 [2024-07-25 17:21:11.649226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.209 [2024-07-25 17:21:11.649237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:19.209 [2024-07-25 17:21:11.649248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.012 ms 00:30:19.209 [2024-07-25 17:21:11.649257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.674825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.674868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:19.468 [2024-07-25 17:21:11.674900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:30:19.468 [2024-07-25 17:21:11.674910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.687876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.687912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:19.468 [2024-07-25 17:21:11.687941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:30:19.468 [2024-07-25 17:21:11.687950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.700273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.700309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:19.468 [2024-07-25 17:21:11.700339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.252 ms 00:30:19.468 [2024-07-25 17:21:11.700348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.701050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.701083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:19.468 [2024-07-25 17:21:11.701097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:30:19.468 [2024-07-25 17:21:11.701107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.769929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.770008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:19.468 [2024-07-25 17:21:11.770060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.794 ms 00:30:19.468 [2024-07-25 17:21:11.770080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.781869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:19.468 [2024-07-25 17:21:11.785212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.785245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:19.468 [2024-07-25 17:21:11.785261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.054 ms 00:30:19.468 [2024-07-25 17:21:11.785271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.785391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.785410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:19.468 [2024-07-25 17:21:11.785422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:19.468 [2024-07-25 17:21:11.785432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.786483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.786517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:19.468 [2024-07-25 17:21:11.786531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:30:19.468 [2024-07-25 17:21:11.786542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.786589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.786603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:19.468 [2024-07-25 17:21:11.786614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:19.468 [2024-07-25 17:21:11.786624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.786686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:19.468 [2024-07-25 17:21:11.786703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.786717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:19.468 [2024-07-25 17:21:11.786729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:19.468 [2024-07-25 17:21:11.786739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.813666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.813705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:19.468 [2024-07-25 17:21:11.813737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.901 ms 00:30:19.468 [2024-07-25 17:21:11.813755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.468 [2024-07-25 17:21:11.813843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.468 [2024-07-25 17:21:11.813859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:19.468 [2024-07-25 17:21:11.813871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:19.468 [2024-07-25 17:21:11.813881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
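One consistency check worth noting from the ftl_layout dump earlier in this startup: the 80.00 MiB l2p region is exactly what the reported geometry implies, i.e. 20971520 L2P entries at 4 bytes per address (a sketch using only numbers printed above):

    # l2p region size implied by the layout dump above
    l2p_entries, addr_size = 20971520, 4          # "L2P entries" and "L2P address size" from ftl_layout_setup
    print(l2p_entries * addr_size / (1024 ** 2))  # -> 80.0 MiB, matching "Region l2p ... blocks: 80.00 MiB"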
00:30:19.468 [2024-07-25 17:21:11.815548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.233 ms, result 0 00:31:04.370  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (22 MBps) Copying: 69/1024 [MB] (22 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (22 MBps) Copying: 138/1024 [MB] (22 MBps) Copying: 161/1024 [MB] (23 MBps) Copying: 184/1024 [MB] (23 MBps) Copying: 207/1024 [MB] (22 MBps) Copying: 230/1024 [MB] (22 MBps) Copying: 253/1024 [MB] (22 MBps) Copying: 275/1024 [MB] (22 MBps) Copying: 299/1024 [MB] (23 MBps) Copying: 322/1024 [MB] (22 MBps) Copying: 344/1024 [MB] (22 MBps) Copying: 367/1024 [MB] (23 MBps) Copying: 390/1024 [MB] (22 MBps) Copying: 413/1024 [MB] (23 MBps) Copying: 436/1024 [MB] (22 MBps) Copying: 459/1024 [MB] (22 MBps) Copying: 482/1024 [MB] (22 MBps) Copying: 505/1024 [MB] (23 MBps) Copying: 528/1024 [MB] (23 MBps) Copying: 551/1024 [MB] (22 MBps) Copying: 574/1024 [MB] (23 MBps) Copying: 597/1024 [MB] (23 MBps) Copying: 620/1024 [MB] (23 MBps) Copying: 644/1024 [MB] (23 MBps) Copying: 667/1024 [MB] (23 MBps) Copying: 690/1024 [MB] (22 MBps) Copying: 714/1024 [MB] (23 MBps) Copying: 736/1024 [MB] (22 MBps) Copying: 758/1024 [MB] (22 MBps) Copying: 781/1024 [MB] (22 MBps) Copying: 804/1024 [MB] (22 MBps) Copying: 826/1024 [MB] (22 MBps) Copying: 849/1024 [MB] (22 MBps) Copying: 872/1024 [MB] (22 MBps) Copying: 895/1024 [MB] (23 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 942/1024 [MB] (23 MBps) Copying: 965/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (23 MBps) Copying: 1012/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-25 17:21:56.633920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.634023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:04.370 [2024-07-25 17:21:56.634044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:04.370 [2024-07-25 17:21:56.634055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.634086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:04.370 [2024-07-25 17:21:56.638392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.638589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:04.370 [2024-07-25 17:21:56.638756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:31:04.370 [2024-07-25 17:21:56.638904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.639266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.639334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:04.370 [2024-07-25 17:21:56.639572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:31:04.370 [2024-07-25 17:21:56.639622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.642850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.643038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:04.370 [2024-07-25 17:21:56.643154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:31:04.370 [2024-07-25 17:21:56.643283] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.648984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.649149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:04.370 [2024-07-25 17:21:56.649256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:31:04.370 [2024-07-25 17:21:56.649389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.675568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.675738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:04.370 [2024-07-25 17:21:56.675853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:31:04.370 [2024-07-25 17:21:56.675898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.690738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.690910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:04.370 [2024-07-25 17:21:56.691074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.730 ms 00:31:04.370 [2024-07-25 17:21:56.691187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.695360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.695404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:04.370 [2024-07-25 17:21:56.695443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.115 ms 00:31:04.370 [2024-07-25 17:21:56.695454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.370 [2024-07-25 17:21:56.720057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.370 [2024-07-25 17:21:56.720095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:04.371 [2024-07-25 17:21:56.720109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.568 ms 00:31:04.371 [2024-07-25 17:21:56.720118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.371 [2024-07-25 17:21:56.744260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.371 [2024-07-25 17:21:56.744298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:04.371 [2024-07-25 17:21:56.744312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.105 ms 00:31:04.371 [2024-07-25 17:21:56.744321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.371 [2024-07-25 17:21:56.767990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.371 [2024-07-25 17:21:56.768040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:04.371 [2024-07-25 17:21:56.768068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:31:04.371 [2024-07-25 17:21:56.768077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.371 [2024-07-25 17:21:56.791724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.371 [2024-07-25 17:21:56.791762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:04.371 [2024-07-25 17:21:56.791776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 23.588 ms 00:31:04.371 [2024-07-25 17:21:56.791784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.371 [2024-07-25 17:21:56.791820] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:04.371 [2024-07-25 17:21:56.791839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:04.371 [2024-07-25 17:21:56.791851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:31:04.371 [2024-07-25 17:21:56.791861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.791971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:31:04.371 [2024-07-25 17:21:56.792108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:04.371 [2024-07-25 17:21:56.792600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792919] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:04.372 [2024-07-25 17:21:56.792973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:04.372 [2024-07-25 17:21:56.792991] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9fc5255-4196-4a04-b04b-38d96588e30f 00:31:04.372 [2024-07-25 17:21:56.793016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:31:04.372 [2024-07-25 17:21:56.793030] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:04.372 [2024-07-25 17:21:56.793040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:04.372 [2024-07-25 17:21:56.793050] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:04.372 [2024-07-25 17:21:56.793059] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:04.372 [2024-07-25 17:21:56.793070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:04.372 [2024-07-25 17:21:56.793089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:04.372 [2024-07-25 17:21:56.793098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:04.372 [2024-07-25 17:21:56.793107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:04.372 [2024-07-25 17:21:56.793117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.372 [2024-07-25 17:21:56.793127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:04.372 [2024-07-25 17:21:56.793143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:31:04.372 [2024-07-25 17:21:56.793153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.372 [2024-07-25 17:21:56.806818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.372 [2024-07-25 17:21:56.806853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:04.372 [2024-07-25 17:21:56.806878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.643 ms 00:31:04.372 [2024-07-25 17:21:56.806887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.372 [2024-07-25 17:21:56.807474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:04.372 [2024-07-25 17:21:56.807510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:04.372 [2024-07-25 17:21:56.807525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:31:04.372 [2024-07-25 17:21:56.807542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.630 [2024-07-25 17:21:56.838315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.630 [2024-07-25 17:21:56.838355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:04.630 [2024-07-25 17:21:56.838370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.630 [2024-07-25 17:21:56.838378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.630 [2024-07-25 17:21:56.838425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.630 [2024-07-25 
17:21:56.838439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:04.630 [2024-07-25 17:21:56.838448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.630 [2024-07-25 17:21:56.838462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.630 [2024-07-25 17:21:56.838537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.630 [2024-07-25 17:21:56.838553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:04.630 [2024-07-25 17:21:56.838563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.630 [2024-07-25 17:21:56.838582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.630 [2024-07-25 17:21:56.838602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.630 [2024-07-25 17:21:56.838614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:04.630 [2024-07-25 17:21:56.838623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.630 [2024-07-25 17:21:56.838631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.630 [2024-07-25 17:21:56.918211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.630 [2024-07-25 17:21:56.918470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:04.630 [2024-07-25 17:21:56.918602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.630 [2024-07-25 17:21:56.918659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:04.631 [2024-07-25 17:21:56.987408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:04.631 [2024-07-25 17:21:56.987537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:04.631 [2024-07-25 17:21:56.987641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:04.631 [2024-07-25 17:21:56.987807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987860] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:04.631 [2024-07-25 17:21:56.987887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.987943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.987958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:04.631 [2024-07-25 17:21:56.987969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.987993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.988106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:04.631 [2024-07-25 17:21:56.988132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:04.631 [2024-07-25 17:21:56.988152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:04.631 [2024-07-25 17:21:56.988169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:04.631 [2024-07-25 17:21:56.988383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.431 ms, result 0 00:31:05.567 00:31:05.567 00:31:05.567 17:21:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:07.469 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:07.469 17:21:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:07.469 17:21:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:07.469 17:21:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:07.469 17:21:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:07.470 17:21:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82671 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82671 ']' 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82671 00:31:07.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82671) - No such process 00:31:07.727 Process with pid 82671 is not found 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82671 is not found' 00:31:07.727 17:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:07.984 Remove shared memory files 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 
-- # rm -f rm -f 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:07.984 ************************************ 00:31:07.984 END TEST ftl_dirty_shutdown 00:31:07.984 ************************************ 00:31:07.984 00:31:07.984 real 4m14.404s 00:31:07.984 user 5m5.056s 00:31:07.984 sys 0m40.918s 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.984 17:22:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:07.984 17:22:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:07.984 17:22:00 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:07.984 17:22:00 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:07.984 17:22:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:07.984 ************************************ 00:31:07.984 START TEST ftl_upgrade_shutdown 00:31:07.984 ************************************ 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:07.984 * Looking for test storage... 00:31:07.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:07.984 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:08.243 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:08.244 
17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85292 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85292 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85292 ']' 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.244 17:22:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:08.244 [2024-07-25 17:22:00.589543] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
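The upgrade_shutdown run being traced here is configured entirely through the environment block exported just above, before the SPDK target is started. A minimal summary of that configuration, using the values visible in this run (the base and cache sizes appear to be in MiB, matching the lvol and split sizes created further down; the L2P limit is forwarded to bdev_ftl_create as --l2p_dram_limit):

  # FTL upgrade/shutdown test parameters (values taken from this log run)
  export FTL_BDEV=ftl                    # name of the FTL bdev under test
  export FTL_BASE=0000:00:11.0           # PCI address of the base (bulk) NVMe device
  export FTL_BASE_SIZE=20480             # size carved out of the base device (MiB)
  export FTL_CACHE=0000:00:10.0          # PCI address of the NV-cache (write buffer) device
  export FTL_CACHE_SIZE=5120             # size of the cache split (MiB)
  export FTL_L2P_DRAM_LIMIT=2            # passed straight through to bdev_ftl_create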
00:31:08.244 [2024-07-25 17:22:00.589717] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85292 ] 00:31:08.501 [2024-07-25 17:22:00.765383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.758 [2024-07-25 17:22:01.041940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:09.323 17:22:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:09.890 { 00:31:09.890 "name": "basen1", 00:31:09.890 "aliases": [ 00:31:09.890 "6f135d21-71ff-4103-9255-b09df5b2bd35" 00:31:09.890 ], 00:31:09.890 "product_name": "NVMe disk", 00:31:09.890 "block_size": 4096, 00:31:09.890 "num_blocks": 1310720, 00:31:09.890 "uuid": "6f135d21-71ff-4103-9255-b09df5b2bd35", 00:31:09.890 "assigned_rate_limits": { 00:31:09.890 "rw_ios_per_sec": 0, 00:31:09.890 "rw_mbytes_per_sec": 0, 00:31:09.890 "r_mbytes_per_sec": 0, 00:31:09.890 "w_mbytes_per_sec": 0 00:31:09.890 }, 00:31:09.890 "claimed": true, 00:31:09.890 "claim_type": "read_many_write_one", 00:31:09.890 "zoned": false, 00:31:09.890 "supported_io_types": { 00:31:09.890 "read": true, 00:31:09.890 "write": true, 00:31:09.890 "unmap": true, 00:31:09.890 "flush": true, 00:31:09.890 "reset": true, 00:31:09.890 "nvme_admin": true, 00:31:09.890 "nvme_io": true, 00:31:09.890 "nvme_io_md": false, 00:31:09.890 "write_zeroes": true, 00:31:09.890 "zcopy": false, 00:31:09.890 "get_zone_info": false, 00:31:09.890 "zone_management": false, 00:31:09.890 "zone_append": false, 00:31:09.890 "compare": true, 00:31:09.890 "compare_and_write": false, 00:31:09.890 "abort": true, 00:31:09.890 "seek_hole": false, 00:31:09.890 "seek_data": false, 00:31:09.890 "copy": true, 00:31:09.890 "nvme_iov_md": false 00:31:09.890 }, 00:31:09.890 "driver_specific": { 00:31:09.890 "nvme": [ 00:31:09.890 { 00:31:09.890 "pci_address": "0000:00:11.0", 00:31:09.890 "trid": { 00:31:09.890 "trtype": "PCIe", 00:31:09.890 "traddr": "0000:00:11.0" 00:31:09.890 }, 00:31:09.890 "ctrlr_data": { 00:31:09.890 "cntlid": 0, 00:31:09.890 "vendor_id": "0x1b36", 00:31:09.890 "model_number": "QEMU NVMe Ctrl", 00:31:09.890 "serial_number": "12341", 00:31:09.890 "firmware_revision": "8.0.0", 00:31:09.890 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:09.890 "oacs": { 00:31:09.890 "security": 0, 00:31:09.890 "format": 1, 00:31:09.890 "firmware": 0, 00:31:09.890 "ns_manage": 1 00:31:09.890 }, 00:31:09.890 "multi_ctrlr": false, 00:31:09.890 "ana_reporting": false 00:31:09.890 }, 00:31:09.890 "vs": { 00:31:09.890 "nvme_version": "1.4" 00:31:09.890 }, 00:31:09.890 "ns_data": { 00:31:09.890 "id": 1, 00:31:09.890 "can_share": false 00:31:09.890 } 00:31:09.890 } 00:31:09.890 ], 00:31:09.890 "mp_policy": "active_passive" 00:31:09.890 } 00:31:09.890 } 00:31:09.890 ]' 00:31:09.890 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:10.149 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:10.407 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=bcf61d49-3d73-4b53-8fe5-639a130185c5 00:31:10.407 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:10.407 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bcf61d49-3d73-4b53-8fe5-639a130185c5 00:31:10.407 17:22:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:10.666 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=e17407a3-0b67-4016-8946-17bfa8dbfc3c 00:31:10.666 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u e17407a3-0b67-4016-8946-17bfa8dbfc3c 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=cf0019af-601e-44a7-8bbe-7014f9c25c3a 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z cf0019af-601e-44a7-8bbe-7014f9c25c3a ]] 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 cf0019af-601e-44a7-8bbe-7014f9c25c3a 5120 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=cf0019af-601e-44a7-8bbe-7014f9c25c3a 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size cf0019af-601e-44a7-8bbe-7014f9c25c3a 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cf0019af-601e-44a7-8bbe-7014f9c25c3a 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:10.924 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cf0019af-601e-44a7-8bbe-7014f9c25c3a 00:31:11.187 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:11.187 { 00:31:11.187 "name": "cf0019af-601e-44a7-8bbe-7014f9c25c3a", 00:31:11.187 "aliases": [ 00:31:11.187 "lvs/basen1p0" 00:31:11.187 ], 00:31:11.187 "product_name": "Logical Volume", 00:31:11.187 "block_size": 4096, 00:31:11.187 "num_blocks": 5242880, 00:31:11.187 "uuid": "cf0019af-601e-44a7-8bbe-7014f9c25c3a", 00:31:11.187 "assigned_rate_limits": { 00:31:11.187 "rw_ios_per_sec": 0, 00:31:11.187 "rw_mbytes_per_sec": 0, 00:31:11.187 "r_mbytes_per_sec": 0, 00:31:11.187 "w_mbytes_per_sec": 0 00:31:11.187 }, 00:31:11.187 "claimed": false, 00:31:11.187 "zoned": false, 00:31:11.187 "supported_io_types": { 00:31:11.187 "read": true, 00:31:11.187 "write": true, 00:31:11.187 "unmap": true, 00:31:11.187 "flush": false, 00:31:11.187 "reset": true, 00:31:11.187 "nvme_admin": false, 00:31:11.187 "nvme_io": false, 00:31:11.187 "nvme_io_md": false, 00:31:11.187 "write_zeroes": true, 00:31:11.187 
"zcopy": false, 00:31:11.187 "get_zone_info": false, 00:31:11.187 "zone_management": false, 00:31:11.187 "zone_append": false, 00:31:11.187 "compare": false, 00:31:11.187 "compare_and_write": false, 00:31:11.187 "abort": false, 00:31:11.187 "seek_hole": true, 00:31:11.187 "seek_data": true, 00:31:11.188 "copy": false, 00:31:11.188 "nvme_iov_md": false 00:31:11.188 }, 00:31:11.188 "driver_specific": { 00:31:11.188 "lvol": { 00:31:11.188 "lvol_store_uuid": "e17407a3-0b67-4016-8946-17bfa8dbfc3c", 00:31:11.188 "base_bdev": "basen1", 00:31:11.188 "thin_provision": true, 00:31:11.188 "num_allocated_clusters": 0, 00:31:11.188 "snapshot": false, 00:31:11.188 "clone": false, 00:31:11.188 "esnap_clone": false 00:31:11.188 } 00:31:11.188 } 00:31:11.188 } 00:31:11.188 ]' 00:31:11.188 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:11.446 17:22:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:11.705 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:11.705 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:11.705 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:11.964 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:11.964 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:11.964 17:22:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cf0019af-601e-44a7-8bbe-7014f9c25c3a -c cachen1p0 --l2p_dram_limit 2 00:31:12.224 [2024-07-25 17:22:04.533556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.533630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:12.224 [2024-07-25 17:22:04.533652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:12.224 [2024-07-25 17:22:04.533665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.533750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.533769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:12.224 [2024-07-25 17:22:04.533781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:12.224 [2024-07-25 17:22:04.533794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.533820] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:12.224 [2024-07-25 17:22:04.534859] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:12.224 [2024-07-25 17:22:04.534903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.534921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:12.224 [2024-07-25 17:22:04.534934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.090 ms 00:31:12.224 [2024-07-25 17:22:04.534949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.535075] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 910dd370-4ca2-438f-b595-e5eb41cd831c 00:31:12.224 [2024-07-25 17:22:04.537542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.537578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:12.224 [2024-07-25 17:22:04.537612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:12.224 [2024-07-25 17:22:04.537623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.550060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.550098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:12.224 [2024-07-25 17:22:04.550133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.349 ms 00:31:12.224 [2024-07-25 17:22:04.550144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.550199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.550215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:12.224 [2024-07-25 17:22:04.550228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:12.224 [2024-07-25 17:22:04.550239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.550314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.550330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:12.224 [2024-07-25 17:22:04.550347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:12.224 [2024-07-25 17:22:04.550358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.550391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:12.224 [2024-07-25 17:22:04.555747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.555961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:12.224 [2024-07-25 17:22:04.556108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.366 ms 00:31:12.224 [2024-07-25 17:22:04.556165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.556349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.556402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:12.224 [2024-07-25 17:22:04.556441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:12.224 [2024-07-25 17:22:04.556480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
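The FTL startup trace running through this part of the log is the result of a short RPC sequence issued earlier in the trace: attach the base NVMe controller, carve a thin lvol out of it, attach and split the cache device, then create the FTL bdev on top of both. A minimal sketch of that sequence, with names, sizes and UUIDs copied from this run (the UUIDs are run-specific, and the stale-lvstore deletion is the suite's clear_lvols step):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0       # base device shows up as basen1
  $RPC bdev_lvol_delete_lvstore -u bcf61d49-3d73-4b53-8fe5-639a130185c5  # clear_lvols: drop the stale lvstore
  $RPC bdev_lvol_create_lvstore basen1 lvs                               # new lvstore e17407a3-...
  $RPC bdev_lvol_create basen1p0 20480 -t -u e17407a3-0b67-4016-8946-17bfa8dbfc3c  # 20480 MiB thin lvol
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0      # cache device shows up as cachen1
  $RPC bdev_split_create cachen1 -s 5120 1                               # one 5120 MiB split -> cachen1p0
  $RPC -t 60 bdev_ftl_create -b ftl -d cf0019af-601e-44a7-8bbe-7014f9c25c3a -c cachen1p0 --l2p_dram_limit 2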
00:31:12.224 [2024-07-25 17:22:04.556569] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:12.224 [2024-07-25 17:22:04.556809] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:12.224 [2024-07-25 17:22:04.556889] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:12.224 [2024-07-25 17:22:04.557114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:31:12.224 [2024-07-25 17:22:04.557164] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:12.224 [2024-07-25 17:22:04.557183] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:12.224 [2024-07-25 17:22:04.557196] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:12.224 [2024-07-25 17:22:04.557214] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:12.224 [2024-07-25 17:22:04.557225] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:12.224 [2024-07-25 17:22:04.557239] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:12.224 [2024-07-25 17:22:04.557252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.557265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:12.224 [2024-07-25 17:22:04.557278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.685 ms 00:31:12.224 [2024-07-25 17:22:04.557291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.557410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.224 [2024-07-25 17:22:04.557427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:12.224 [2024-07-25 17:22:04.557440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:31:12.224 [2024-07-25 17:22:04.557456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.224 [2024-07-25 17:22:04.557559] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:12.224 [2024-07-25 17:22:04.557581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:12.224 [2024-07-25 17:22:04.557593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:12.224 [2024-07-25 17:22:04.557607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:12.224 [2024-07-25 17:22:04.557656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:12.224 [2024-07-25 17:22:04.557692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:12.224 [2024-07-25 17:22:04.557703] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:12.224 [2024-07-25 17:22:04.557715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:12.224 [2024-07-25 17:22:04.557755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:31:12.224 [2024-07-25 17:22:04.557765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:12.224 [2024-07-25 17:22:04.557790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:12.224 [2024-07-25 17:22:04.557803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:12.224 [2024-07-25 17:22:04.557828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:12.224 [2024-07-25 17:22:04.557839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.224 [2024-07-25 17:22:04.557851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:12.224 [2024-07-25 17:22:04.557861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:12.224 [2024-07-25 17:22:04.557874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:12.225 [2024-07-25 17:22:04.557884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:12.225 [2024-07-25 17:22:04.557897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:12.225 [2024-07-25 17:22:04.557907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:12.225 [2024-07-25 17:22:04.557919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:12.225 [2024-07-25 17:22:04.557929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:12.225 [2024-07-25 17:22:04.557959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:12.225 [2024-07-25 17:22:04.557970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:12.225 [2024-07-25 17:22:04.558000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:12.225 [2024-07-25 17:22:04.558011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:12.225 [2024-07-25 17:22:04.558024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:12.225 [2024-07-25 17:22:04.558035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:12.225 [2024-07-25 17:22:04.558051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:12.225 [2024-07-25 17:22:04.558091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:12.225 [2024-07-25 17:22:04.558104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:12.225 [2024-07-25 17:22:04.558130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:12.225 [2024-07-25 17:22:04.558184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:12.225 [2024-07-25 17:22:04.558195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558207] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:31:12.225 [2024-07-25 17:22:04.558219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:12.225 [2024-07-25 17:22:04.558233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:12.225 [2024-07-25 17:22:04.558244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:12.225 [2024-07-25 17:22:04.558259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:12.225 [2024-07-25 17:22:04.558270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:12.225 [2024-07-25 17:22:04.558286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:12.225 [2024-07-25 17:22:04.558297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:12.225 [2024-07-25 17:22:04.558325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:12.225 [2024-07-25 17:22:04.558336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:12.225 [2024-07-25 17:22:04.558353] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:12.225 [2024-07-25 17:22:04.558370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:12.225 [2024-07-25 17:22:04.558397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:12.225 [2024-07-25 17:22:04.558436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:12.225 [2024-07-25 17:22:04.558448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:12.225 [2024-07-25 17:22:04.558461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:12.225 [2024-07-25 17:22:04.558472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:12.225 [2024-07-25 17:22:04.558566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:12.225 [2024-07-25 17:22:04.558579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:12.225 [2024-07-25 17:22:04.558605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:12.225 [2024-07-25 17:22:04.558618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:12.225 [2024-07-25 17:22:04.558629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:12.225 [2024-07-25 17:22:04.558644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.225 [2024-07-25 17:22:04.558684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:12.225 [2024-07-25 17:22:04.558700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:31:12.225 [2024-07-25 17:22:04.558712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.225 [2024-07-25 17:22:04.558772] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
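The layout numbers above hang together: 3774873 L2P entries at an address size of 4 bytes need roughly 14.4 MiB, which is why the l2p region is sized at 14.50 MiB (0xe80 blocks of 4 KiB), and the NV cache chunk count of 5 is exactly what the scrub step that follows walks through. A sketch of the arithmetic, assuming the 4 KiB FTL block size reported for the base device:

    echo $((3774873 * 4))   # -> 15099492 bytes (~14.4 MiB) of L2P table
    echo $((0xe80 * 4))     # -> 14848 KiB = 14.50 MiB, the l2p region size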
00:31:12.225 [2024-07-25 17:22:04.558789] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:15.518 [2024-07-25 17:22:07.480757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.480846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:15.518 [2024-07-25 17:22:07.480887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2921.991 ms 00:31:15.518 [2024-07-25 17:22:07.480900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.517744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.517793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:15.518 [2024-07-25 17:22:07.517832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.545 ms 00:31:15.518 [2024-07-25 17:22:07.517843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.517952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.517970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:15.518 [2024-07-25 17:22:07.518004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:15.518 [2024-07-25 17:22:07.518054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.555594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.555634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:15.518 [2024-07-25 17:22:07.555670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.483 ms 00:31:15.518 [2024-07-25 17:22:07.555681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.555725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.555739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:15.518 [2024-07-25 17:22:07.555758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:15.518 [2024-07-25 17:22:07.555768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.556828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.557053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:15.518 [2024-07-25 17:22:07.557185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:31:15.518 [2024-07-25 17:22:07.557207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.557278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.557299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:15.518 [2024-07-25 17:22:07.557314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:15.518 [2024-07-25 17:22:07.557326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.575285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.575322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:15.518 [2024-07-25 17:22:07.575357] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.930 ms 00:31:15.518 [2024-07-25 17:22:07.575368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.587471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:15.518 [2024-07-25 17:22:07.588849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.588898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:15.518 [2024-07-25 17:22:07.588914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.399 ms 00:31:15.518 [2024-07-25 17:22:07.588926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.624906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.624970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:15.518 [2024-07-25 17:22:07.625004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.951 ms 00:31:15.518 [2024-07-25 17:22:07.625070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.625177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.625197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:15.518 [2024-07-25 17:22:07.625211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:31:15.518 [2024-07-25 17:22:07.625228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.654442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.654497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:15.518 [2024-07-25 17:22:07.654514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.151 ms 00:31:15.518 [2024-07-25 17:22:07.654531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.680846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.680906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:15.518 [2024-07-25 17:22:07.680922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.288 ms 00:31:15.518 [2024-07-25 17:22:07.680935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.681696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.681759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:15.518 [2024-07-25 17:22:07.681792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:31:15.518 [2024-07-25 17:22:07.681805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.762164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.762243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:15.518 [2024-07-25 17:22:07.762262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 80.315 ms 00:31:15.518 [2024-07-25 17:22:07.762289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.788712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:15.518 [2024-07-25 17:22:07.788770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:15.518 [2024-07-25 17:22:07.788786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.363 ms 00:31:15.518 [2024-07-25 17:22:07.788804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.813966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.814030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:15.518 [2024-07-25 17:22:07.814056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.121 ms 00:31:15.518 [2024-07-25 17:22:07.814070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.839356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.839429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:15.518 [2024-07-25 17:22:07.839446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.246 ms 00:31:15.518 [2024-07-25 17:22:07.839459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.839506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.839526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:15.518 [2024-07-25 17:22:07.839539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:15.518 [2024-07-25 17:22:07.839554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.839641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.518 [2024-07-25 17:22:07.839665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:15.518 [2024-07-25 17:22:07.839677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:15.518 [2024-07-25 17:22:07.839690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.518 [2024-07-25 17:22:07.841358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3307.197 ms, result 0 00:31:15.518 { 00:31:15.518 "name": "ftl", 00:31:15.518 "uuid": "910dd370-4ca2-438f-b595-e5eb41cd831c" 00:31:15.518 } 00:31:15.518 17:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:15.777 [2024-07-25 17:22:08.099987] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.777 17:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:16.036 17:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:16.294 [2024-07-25 17:22:08.652627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:16.295 17:22:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:16.553 [2024-07-25 17:22:08.863351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:16.553 17:22:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:16.812 Fill FTL, iteration 1 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:16.812 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85416 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85416 /var/tmp/spdk.tgt.sock 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85416 ']' 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:16.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.813 17:22:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:17.072 [2024-07-25 17:22:09.327657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
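Condensed from the rpc.py calls traced above (rpc.py abbreviates the full scripts/rpc.py path), the NVMe/TCP export of the FTL bdev amounts to creating the transport, a subsystem with one namespace backed by ftl, and a listener on 127.0.0.1:4420, followed by a config save:

    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    rpc.py save_config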
00:31:17.072 [2024-07-25 17:22:09.328172] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85416 ] 00:31:17.072 [2024-07-25 17:22:09.494320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.330 [2024-07-25 17:22:09.768167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.267 17:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:18.267 17:22:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:18.267 17:22:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:18.526 ftln1 00:31:18.526 17:22:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:18.526 17:22:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85416 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85416 ']' 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85416 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85416 00:31:18.784 killing process with pid 85416 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85416' 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85416 00:31:18.784 17:22:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85416 00:31:21.315 17:22:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:21.315 17:22:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:21.315 [2024-07-25 17:22:13.300775] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
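The fill traffic is generated from a second single-core SPDK process: a scratch spdk_tgt is started on /var/tmp/spdk.tgt.sock, attaches to the exported subsystem so the remote namespace shows up as ftln1, and its bdev configuration is dumped as JSON for spdk_dd to load. The exact redirection into the config file is not visible in the trace, but the pieces line up with the --json file spdk_dd is given; roughly:

    # initiator side: expose the remote FTL namespace as bdev "ftln1"
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # wrap the bdev subsystem config into a standalone JSON document for spdk_dd
    { echo '{"subsystems": ['
      rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # fill iteration 1: 1024 x 1 MiB of random data at queue depth 2, starting at block offset 0
    spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

As the trace shows, the scratch target is killed before spdk_dd runs; spdk_dd loads the JSON config and recreates the NVMe/TCP attachment itself.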
00:31:21.315 [2024-07-25 17:22:13.300952] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85469 ] 00:31:21.315 [2024-07-25 17:22:13.474975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.315 [2024-07-25 17:22:13.697694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.877  Copying: 218/1024 [MB] (218 MBps) Copying: 433/1024 [MB] (215 MBps) Copying: 647/1024 [MB] (214 MBps) Copying: 862/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:31:27.877 00:31:27.877 Calculate MD5 checksum, iteration 1 00:31:27.877 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:27.877 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:27.877 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:27.877 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:27.878 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:27.878 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:27.878 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:27.878 17:22:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:27.878 [2024-07-25 17:22:20.108801] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:27.878 [2024-07-25 17:22:20.109016] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85539 ] 00:31:27.878 [2024-07-25 17:22:20.281552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.136 [2024-07-25 17:22:20.493697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.672  Copying: 480/1024 [MB] (480 MBps) Copying: 970/1024 [MB] (490 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:31:31.672 00:31:31.672 17:22:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:31.672 17:22:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:33.573 Fill FTL, iteration 2 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f1cdb3a15ee03140c3b4672b25f70146 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:33.573 17:22:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:33.573 [2024-07-25 17:22:25.953013] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
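Each fill is paired with a read-back of the same 1 GiB window into a scratch file and an md5sum of that file; the digest is stored per iteration in the sums array (f1cdb3a15ee03140c3b4672b25f70146 for iteration 1 above). In sketch form, using the same tcp_dd helper and file path the trace shows:

    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[0]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')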
00:31:33.573 [2024-07-25 17:22:25.953198] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85599 ] 00:31:33.831 [2024-07-25 17:22:26.121052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.090 [2024-07-25 17:22:26.341397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.214  Copying: 219/1024 [MB] (219 MBps) Copying: 431/1024 [MB] (212 MBps) Copying: 649/1024 [MB] (218 MBps) Copying: 859/1024 [MB] (210 MBps) Copying: 1024/1024 [MB] (average 213 MBps) 00:31:40.214 00:31:40.472 Calculate MD5 checksum, iteration 2 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:40.472 17:22:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:40.472 [2024-07-25 17:22:32.799276] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
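The offsets simply advance by the number of blocks copied: the first pass wrote at --seek=0 and read back at --skip=0, the second pass writes at --seek=1024, and the skip bookkeeping follows in the same way once its read-back completes (seek has already moved on to 2048 above). The loop shape, inferred from the upgrade_shutdown.sh trace markers rather than quoted from the script, with $testfile standing in for the scratch file path:

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
        skip=$((skip + count))
    done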
00:31:40.472 [2024-07-25 17:22:32.799450] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85671 ] 00:31:40.730 [2024-07-25 17:22:32.969861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.730 [2024-07-25 17:22:33.194745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.931  Copying: 485/1024 [MB] (485 MBps) Copying: 973/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:31:44.931 00:31:44.931 17:22:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:44.931 17:22:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.832 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:46.832 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=11637f23bbd342aa34a1e557c82954f8 00:31:46.832 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:46.832 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:46.832 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:47.091 [2024-07-25 17:22:39.445610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.091 [2024-07-25 17:22:39.445663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:47.091 [2024-07-25 17:22:39.445699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:47.091 [2024-07-25 17:22:39.445717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.091 [2024-07-25 17:22:39.445750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.091 [2024-07-25 17:22:39.445765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:47.091 [2024-07-25 17:22:39.445776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:47.091 [2024-07-25 17:22:39.445786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.091 [2024-07-25 17:22:39.445822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.091 [2024-07-25 17:22:39.445835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:47.091 [2024-07-25 17:22:39.445846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:47.091 [2024-07-25 17:22:39.445857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.091 [2024-07-25 17:22:39.445927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.300 ms, result 0 00:31:47.091 true 00:31:47.091 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:47.358 { 00:31:47.359 "name": "ftl", 00:31:47.359 "properties": [ 00:31:47.359 { 00:31:47.359 "name": "superblock_version", 00:31:47.359 "value": 5, 00:31:47.359 "read-only": true 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "name": "base_device", 00:31:47.359 "bands": [ 00:31:47.359 { 00:31:47.359 "id": 0, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 
00:31:47.359 { 00:31:47.359 "id": 1, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 2, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 3, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 4, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 5, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 6, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 7, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 8, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 9, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 10, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 11, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 12, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 13, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 14, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 15, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 16, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 17, 00:31:47.359 "state": "FREE", 00:31:47.359 "validity": 0.0 00:31:47.359 } 00:31:47.359 ], 00:31:47.359 "read-only": true 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "name": "cache_device", 00:31:47.359 "type": "bdev", 00:31:47.359 "chunks": [ 00:31:47.359 { 00:31:47.359 "id": 0, 00:31:47.359 "state": "INACTIVE", 00:31:47.359 "utilization": 0.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 1, 00:31:47.359 "state": "CLOSED", 00:31:47.359 "utilization": 1.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 2, 00:31:47.359 "state": "CLOSED", 00:31:47.359 "utilization": 1.0 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 3, 00:31:47.359 "state": "OPEN", 00:31:47.359 "utilization": 0.001953125 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "id": 4, 00:31:47.359 "state": "OPEN", 00:31:47.359 "utilization": 0.0 00:31:47.359 } 00:31:47.359 ], 00:31:47.359 "read-only": true 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "name": "verbose_mode", 00:31:47.359 "value": true, 00:31:47.359 "unit": "", 00:31:47.359 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:47.359 }, 00:31:47.359 { 00:31:47.359 "name": "prep_upgrade_on_shutdown", 00:31:47.359 "value": false, 00:31:47.359 "unit": "", 00:31:47.359 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:47.359 } 00:31:47.359 ] 00:31:47.359 } 00:31:47.359 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:47.625 [2024-07-25 17:22:39.898048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.625 [2024-07-25 
17:22:39.898307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:47.625 [2024-07-25 17:22:39.898427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:47.625 [2024-07-25 17:22:39.898476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.625 [2024-07-25 17:22:39.898611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.625 [2024-07-25 17:22:39.898692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:47.625 [2024-07-25 17:22:39.898734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:47.625 [2024-07-25 17:22:39.898894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.625 [2024-07-25 17:22:39.899039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.625 [2024-07-25 17:22:39.899091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:47.625 [2024-07-25 17:22:39.899252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:47.625 [2024-07-25 17:22:39.899295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.625 [2024-07-25 17:22:39.899399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.333 ms, result 0 00:31:47.625 true 00:31:47.625 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:47.625 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:47.625 17:22:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:47.883 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:47.883 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:47.883 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:48.141 [2024-07-25 17:22:40.386567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.141 [2024-07-25 17:22:40.386619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:48.141 [2024-07-25 17:22:40.386638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:48.141 [2024-07-25 17:22:40.386649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.141 [2024-07-25 17:22:40.386719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.141 [2024-07-25 17:22:40.386733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:48.141 [2024-07-25 17:22:40.386744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:48.141 [2024-07-25 17:22:40.386754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.141 [2024-07-25 17:22:40.386778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.141 [2024-07-25 17:22:40.386790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:48.141 [2024-07-25 17:22:40.386802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:48.141 [2024-07-25 17:22:40.386812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
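The used=3 above is read straight off the cache_device chunk list in the properties dump: chunks 1 and 2 are CLOSED at utilization 1.0, chunk 3 is OPEN at 0.001953125, and chunks 0 and 4 sit at 0.0, so three chunks carry data and the [[ 3 -eq 0 ]] check fails, confirming there is still data in the NV cache before the shutdown path is exercised. The same filter run by hand (properties.json is a hypothetical capture of the RPC output):

    rpc.py bdev_ftl_get_properties -b ftl > properties.json
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' properties.json   # -> 3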
00:31:48.141 [2024-07-25 17:22:40.386881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.300 ms, result 0 00:31:48.141 true 00:31:48.141 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:48.141 { 00:31:48.141 "name": "ftl", 00:31:48.141 "properties": [ 00:31:48.141 { 00:31:48.141 "name": "superblock_version", 00:31:48.141 "value": 5, 00:31:48.141 "read-only": true 00:31:48.141 }, 00:31:48.141 { 00:31:48.141 "name": "base_device", 00:31:48.141 "bands": [ 00:31:48.141 { 00:31:48.141 "id": 0, 00:31:48.141 "state": "FREE", 00:31:48.141 "validity": 0.0 00:31:48.141 }, 00:31:48.141 { 00:31:48.141 "id": 1, 00:31:48.141 "state": "FREE", 00:31:48.141 "validity": 0.0 00:31:48.141 }, 00:31:48.141 { 00:31:48.141 "id": 2, 00:31:48.141 "state": "FREE", 00:31:48.141 "validity": 0.0 00:31:48.141 }, 00:31:48.141 { 00:31:48.141 "id": 3, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 4, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 5, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 6, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 7, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 8, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 9, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 10, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 11, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 12, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 13, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 14, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 15, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 16, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 17, 00:31:48.142 "state": "FREE", 00:31:48.142 "validity": 0.0 00:31:48.142 } 00:31:48.142 ], 00:31:48.142 "read-only": true 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "name": "cache_device", 00:31:48.142 "type": "bdev", 00:31:48.142 "chunks": [ 00:31:48.142 { 00:31:48.142 "id": 0, 00:31:48.142 "state": "INACTIVE", 00:31:48.142 "utilization": 0.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 1, 00:31:48.142 "state": "CLOSED", 00:31:48.142 "utilization": 1.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 2, 00:31:48.142 "state": "CLOSED", 00:31:48.142 "utilization": 1.0 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 3, 00:31:48.142 "state": "OPEN", 00:31:48.142 "utilization": 0.001953125 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "id": 4, 00:31:48.142 "state": "OPEN", 00:31:48.142 "utilization": 0.0 00:31:48.142 } 00:31:48.142 ], 00:31:48.142 "read-only": true 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "name": "verbose_mode", 00:31:48.142 "value": 
true, 00:31:48.142 "unit": "", 00:31:48.142 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:48.142 }, 00:31:48.142 { 00:31:48.142 "name": "prep_upgrade_on_shutdown", 00:31:48.142 "value": true, 00:31:48.142 "unit": "", 00:31:48.142 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:48.142 } 00:31:48.142 ] 00:31:48.142 } 00:31:48.142 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:48.142 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85292 ]] 00:31:48.142 17:22:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85292 00:31:48.142 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85292 ']' 00:31:48.142 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85292 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85292 00:31:48.400 killing process with pid 85292 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85292' 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85292 00:31:48.400 17:22:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85292 00:31:49.333 [2024-07-25 17:22:41.450387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:49.333 [2024-07-25 17:22:41.467446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.333 [2024-07-25 17:22:41.467490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:49.333 [2024-07-25 17:22:41.467525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:49.333 [2024-07-25 17:22:41.467536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.333 [2024-07-25 17:22:41.467563] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:49.333 [2024-07-25 17:22:41.471258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.333 [2024-07-25 17:22:41.471304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:49.333 [2024-07-25 17:22:41.471341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.675 ms 00:31:49.333 [2024-07-25 17:22:41.471351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.776713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.446 [2024-07-25 17:22:49.776774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:57.446 [2024-07-25 17:22:49.776810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8305.378 ms 00:31:57.446 [2024-07-25 17:22:49.776821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.778198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:57.446 [2024-07-25 17:22:49.778230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:57.446 [2024-07-25 17:22:49.778246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.356 ms 00:31:57.446 [2024-07-25 17:22:49.778257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.779463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.446 [2024-07-25 17:22:49.779486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:57.446 [2024-07-25 17:22:49.779506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.168 ms 00:31:57.446 [2024-07-25 17:22:49.779516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.790986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.446 [2024-07-25 17:22:49.791052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:57.446 [2024-07-25 17:22:49.791083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.433 ms 00:31:57.446 [2024-07-25 17:22:49.791093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.797977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.446 [2024-07-25 17:22:49.798045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:57.446 [2024-07-25 17:22:49.798075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.848 ms 00:31:57.446 [2024-07-25 17:22:49.798086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.798196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.446 [2024-07-25 17:22:49.798216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:57.446 [2024-07-25 17:22:49.798228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:31:57.446 [2024-07-25 17:22:49.798238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.446 [2024-07-25 17:22:49.808545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.808590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:31:57.447 [2024-07-25 17:22:49.808619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.288 ms 00:31:57.447 [2024-07-25 17:22:49.808628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.819109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.819145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:31:57.447 [2024-07-25 17:22:49.819173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.446 ms 00:31:57.447 [2024-07-25 17:22:49.819182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.829265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.829299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:57.447 [2024-07-25 17:22:49.829327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.049 ms 00:31:57.447 [2024-07-25 17:22:49.829336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.839406] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.839439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:57.447 [2024-07-25 17:22:49.839467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.006 ms 00:31:57.447 [2024-07-25 17:22:49.839476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.839510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:57.447 [2024-07-25 17:22:49.839529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:57.447 [2024-07-25 17:22:49.839542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:57.447 [2024-07-25 17:22:49.839553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:57.447 [2024-07-25 17:22:49.839564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:57.447 [2024-07-25 17:22:49.839736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:57.447 [2024-07-25 17:22:49.839746] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 910dd370-4ca2-438f-b595-e5eb41cd831c 00:31:57.447 [2024-07-25 17:22:49.839756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:57.447 [2024-07-25 17:22:49.839764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:31:57.447 [2024-07-25 17:22:49.839779] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:57.447 [2024-07-25 17:22:49.839789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:57.447 [2024-07-25 17:22:49.839798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:57.447 [2024-07-25 17:22:49.839808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:57.447 [2024-07-25 17:22:49.839817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:57.447 [2024-07-25 17:22:49.839826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:57.447 [2024-07-25 17:22:49.839835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:57.447 [2024-07-25 17:22:49.839844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.839854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:57.447 [2024-07-25 17:22:49.839865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:31:57.447 [2024-07-25 17:22:49.839876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.854098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.854138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:57.447 [2024-07-25 17:22:49.854168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.202 ms 00:31:57.447 [2024-07-25 17:22:49.854178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.854574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:57.447 [2024-07-25 17:22:49.854588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:57.447 [2024-07-25 17:22:49.854599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:31:57.447 [2024-07-25 17:22:49.854609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.898110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.447 [2024-07-25 17:22:49.898149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:57.447 [2024-07-25 17:22:49.898179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.447 [2024-07-25 17:22:49.898189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.898225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.447 [2024-07-25 17:22:49.898238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:57.447 [2024-07-25 17:22:49.898248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.447 [2024-07-25 17:22:49.898264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.898345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.447 [2024-07-25 17:22:49.898369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:57.447 [2024-07-25 17:22:49.898380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.447 [2024-07-25 17:22:49.898390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.447 [2024-07-25 17:22:49.898420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:31:57.447 [2024-07-25 17:22:49.898432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:57.447 [2024-07-25 17:22:49.898443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.447 [2024-07-25 17:22:49.898452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:49.981117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:49.981177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:57.706 [2024-07-25 17:22:49.981218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:49.981229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:57.706 [2024-07-25 17:22:50.059380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:57.706 [2024-07-25 17:22:50.059559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:57.706 [2024-07-25 17:22:50.059653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:57.706 [2024-07-25 17:22:50.059805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:57.706 [2024-07-25 17:22:50.059891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.059945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.059959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:57.706 [2024-07-25 17:22:50.059970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.706 [2024-07-25 17:22:50.059987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.706 [2024-07-25 17:22:50.060103] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:57.706 [2024-07-25 17:22:50.060119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:57.706 [2024-07-25 17:22:50.060147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:57.707 [2024-07-25 17:22:50.060159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:57.707 [2024-07-25 17:22:50.060303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8592.870 ms, result 0 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85876 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85876 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85876 ']' 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.045 17:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:01.045 [2024-07-25 17:22:53.151466] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
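The ftl_dev_dump_stats block in the shutdown trace above reports total writes 786752 against user writes 524288 and a WAF of 1.5006, which is consistent with WAF simply being total writes divided by user writes; the gap between the two counters is FTL-internal traffic (for example the metadata persisted in the shutdown steps above) rather than user I/O. A minimal sketch recomputing the figure from those two numbers (the variable names are illustrative, not taken from the test scripts):

    total_writes=786752   # "total writes" from ftl_dev_dump_stats above
    user_writes=524288    # "user writes" from ftl_dev_dump_stats above
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints: WAF: 1.5006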
00:32:01.045 [2024-07-25 17:22:53.151651] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85876 ] 00:32:01.045 [2024-07-25 17:22:53.325923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.304 [2024-07-25 17:22:53.535247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.870 [2024-07-25 17:22:54.315287] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:01.870 [2024-07-25 17:22:54.315374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:02.129 [2024-07-25 17:22:54.461176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.461219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:02.129 [2024-07-25 17:22:54.461253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:02.129 [2024-07-25 17:22:54.461263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.461321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.461337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:02.129 [2024-07-25 17:22:54.461348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:02.129 [2024-07-25 17:22:54.461358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.461391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:02.129 [2024-07-25 17:22:54.462209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:02.129 [2024-07-25 17:22:54.462244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.462257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:02.129 [2024-07-25 17:22:54.462269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:32:02.129 [2024-07-25 17:22:54.462283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.464357] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:02.129 [2024-07-25 17:22:54.478167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.478204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:02.129 [2024-07-25 17:22:54.478236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.811 ms 00:32:02.129 [2024-07-25 17:22:54.478245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.478310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.478326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:02.129 [2024-07-25 17:22:54.478337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:02.129 [2024-07-25 17:22:54.478346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.486984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 
17:22:54.487017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:02.129 [2024-07-25 17:22:54.487046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.561 ms 00:32:02.129 [2024-07-25 17:22:54.487056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.487136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.487153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:02.129 [2024-07-25 17:22:54.487168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:02.129 [2024-07-25 17:22:54.487179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.487257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.487273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:02.129 [2024-07-25 17:22:54.487285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:02.129 [2024-07-25 17:22:54.487295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.487327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:02.129 [2024-07-25 17:22:54.491971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.492030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:02.129 [2024-07-25 17:22:54.492059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.652 ms 00:32:02.129 [2024-07-25 17:22:54.492069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.492105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.492120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:02.129 [2024-07-25 17:22:54.492135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:02.129 [2024-07-25 17:22:54.492145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.129 [2024-07-25 17:22:54.492190] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:02.129 [2024-07-25 17:22:54.492219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:02.129 [2024-07-25 17:22:54.492255] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:02.129 [2024-07-25 17:22:54.492273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:32:02.129 [2024-07-25 17:22:54.492396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:02.129 [2024-07-25 17:22:54.492415] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:02.129 [2024-07-25 17:22:54.492428] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:32:02.129 [2024-07-25 17:22:54.492442] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:02.129 [2024-07-25 17:22:54.492454] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:32:02.129 [2024-07-25 17:22:54.492465] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:02.129 [2024-07-25 17:22:54.492475] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:02.129 [2024-07-25 17:22:54.492485] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:02.129 [2024-07-25 17:22:54.492495] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:02.129 [2024-07-25 17:22:54.492506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.129 [2024-07-25 17:22:54.492516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:02.130 [2024-07-25 17:22:54.492526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:32:02.130 [2024-07-25 17:22:54.492540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.130 [2024-07-25 17:22:54.492622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.130 [2024-07-25 17:22:54.492645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:02.130 [2024-07-25 17:22:54.492656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:32:02.130 [2024-07-25 17:22:54.492666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.130 [2024-07-25 17:22:54.492769] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:02.130 [2024-07-25 17:22:54.492784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:02.130 [2024-07-25 17:22:54.492796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.130 [2024-07-25 17:22:54.492807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:02.130 [2024-07-25 17:22:54.492832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:02.130 [2024-07-25 17:22:54.492853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:02.130 [2024-07-25 17:22:54.492863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:02.130 [2024-07-25 17:22:54.492873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:02.130 [2024-07-25 17:22:54.492893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:02.130 [2024-07-25 17:22:54.492902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:02.130 [2024-07-25 17:22:54.492936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:02.130 [2024-07-25 17:22:54.492945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:02.130 [2024-07-25 17:22:54.492964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:02.130 [2024-07-25 17:22:54.492973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.492982] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:02.130 [2024-07-25 17:22:54.492992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:02.130 [2024-07-25 17:22:54.493020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:02.130 [2024-07-25 17:22:54.493108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:02.130 [2024-07-25 17:22:54.493153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:02.130 [2024-07-25 17:22:54.493182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:02.130 [2024-07-25 17:22:54.493213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:02.130 [2024-07-25 17:22:54.493242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:02.130 [2024-07-25 17:22:54.493272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:02.130 [2024-07-25 17:22:54.493282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493291] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:02.130 [2024-07-25 17:22:54.493302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:02.130 [2024-07-25 17:22:54.493312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493323] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:02.130 [2024-07-25 17:22:54.493333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:02.130 [2024-07-25 17:22:54.493343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:02.130 [2024-07-25 17:22:54.493352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:02.130 [2024-07-25 17:22:54.493362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:02.130 [2024-07-25 17:22:54.493384] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:02.130 [2024-07-25 17:22:54.493394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:02.130 [2024-07-25 17:22:54.493421] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:02.130 [2024-07-25 17:22:54.493449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:02.130 [2024-07-25 17:22:54.493471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:02.130 [2024-07-25 17:22:54.493502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:02.130 [2024-07-25 17:22:54.493512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:02.130 [2024-07-25 17:22:54.493522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:02.130 [2024-07-25 17:22:54.493532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:02.130 [2024-07-25 17:22:54.493604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:02.130 [2024-07-25 17:22:54.493617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:02.130 [2024-07-25 17:22:54.493640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:02.130 [2024-07-25 17:22:54.493651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:02.130 [2024-07-25 17:22:54.493661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:02.130 [2024-07-25 17:22:54.493673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.130 [2024-07-25 17:22:54.493683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:02.130 [2024-07-25 17:22:54.493694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.962 ms 00:32:02.130 [2024-07-25 17:22:54.493709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.130 [2024-07-25 17:22:54.493766] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:32:02.130 [2024-07-25 17:22:54.493782] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:05.416 [2024-07-25 17:22:57.492654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.492718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:05.416 [2024-07-25 17:22:57.492755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2998.903 ms 00:32:05.416 [2024-07-25 17:22:57.492772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.527764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.527819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:05.416 [2024-07-25 17:22:57.527854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.698 ms 00:32:05.416 [2024-07-25 17:22:57.527865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.528012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.528032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:05.416 [2024-07-25 17:22:57.528045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:05.416 [2024-07-25 17:22:57.528057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.565824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.565868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:05.416 [2024-07-25 17:22:57.565900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.709 ms 00:32:05.416 [2024-07-25 17:22:57.565911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.565973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.566200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:05.416 [2024-07-25 17:22:57.566257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:05.416 [2024-07-25 17:22:57.566273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.567007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.567037] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:05.416 [2024-07-25 17:22:57.567051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.619 ms 00:32:05.416 [2024-07-25 17:22:57.567077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.567132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.567146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:05.416 [2024-07-25 17:22:57.567157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:05.416 [2024-07-25 17:22:57.567167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.586019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.586057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:05.416 [2024-07-25 17:22:57.586088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.825 ms 00:32:05.416 [2024-07-25 17:22:57.586106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.600721] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:05.416 [2024-07-25 17:22:57.600762] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:05.416 [2024-07-25 17:22:57.600794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.600805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:32:05.416 [2024-07-25 17:22:57.600816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.562 ms 00:32:05.416 [2024-07-25 17:22:57.600827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.616054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.616092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:32:05.416 [2024-07-25 17:22:57.616124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.167 ms 00:32:05.416 [2024-07-25 17:22:57.616135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.629020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.629075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:32:05.416 [2024-07-25 17:22:57.629106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.834 ms 00:32:05.416 [2024-07-25 17:22:57.629117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.641817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.641855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:32:05.416 [2024-07-25 17:22:57.641885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.659 ms 00:32:05.416 [2024-07-25 17:22:57.641895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.642727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.642758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:05.416 [2024-07-25 
17:22:57.642790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.702 ms 00:32:05.416 [2024-07-25 17:22:57.642800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.725203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.725269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:05.416 [2024-07-25 17:22:57.725304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.375 ms 00:32:05.416 [2024-07-25 17:22:57.725316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.735787] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:05.416 [2024-07-25 17:22:57.736735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.736920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:05.416 [2024-07-25 17:22:57.736952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.354 ms 00:32:05.416 [2024-07-25 17:22:57.736964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.737102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.737123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:32:05.416 [2024-07-25 17:22:57.737136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:05.416 [2024-07-25 17:22:57.737146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.737243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.737261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:05.416 [2024-07-25 17:22:57.737272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:05.416 [2024-07-25 17:22:57.737288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.737322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.737336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:05.416 [2024-07-25 17:22:57.737348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:05.416 [2024-07-25 17:22:57.737358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.737394] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:05.416 [2024-07-25 17:22:57.737410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.737437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:05.416 [2024-07-25 17:22:57.737447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:05.416 [2024-07-25 17:22:57.737457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.763924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.763963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:05.416 [2024-07-25 17:22:57.764009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.435 ms 00:32:05.416 [2024-07-25 17:22:57.764022] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.764121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.416 [2024-07-25 17:22:57.764139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:05.416 [2024-07-25 17:22:57.764151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:32:05.416 [2024-07-25 17:22:57.764169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.416 [2024-07-25 17:22:57.765927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3304.133 ms, result 0 00:32:05.416 [2024-07-25 17:22:57.780342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.416 [2024-07-25 17:22:57.796360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:05.416 [2024-07-25 17:22:57.804449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:05.416 17:22:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.416 17:22:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:32:05.416 17:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:05.416 17:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:05.416 17:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:05.675 [2024-07-25 17:22:58.076523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.675 [2024-07-25 17:22:58.076576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:05.675 [2024-07-25 17:22:58.076615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:05.675 [2024-07-25 17:22:58.076626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.675 [2024-07-25 17:22:58.076655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.675 [2024-07-25 17:22:58.076670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:05.675 [2024-07-25 17:22:58.076681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:05.675 [2024-07-25 17:22:58.076692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.675 [2024-07-25 17:22:58.076716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:05.675 [2024-07-25 17:22:58.076728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:05.675 [2024-07-25 17:22:58.076739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:05.675 [2024-07-25 17:22:58.076754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:05.675 [2024-07-25 17:22:58.076819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.281 ms, result 0 00:32:05.675 true 00:32:05.675 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:05.933 { 00:32:05.933 "name": "ftl", 00:32:05.934 "properties": [ 00:32:05.934 { 00:32:05.934 "name": "superblock_version", 00:32:05.934 "value": 5, 00:32:05.934 "read-only": true 00:32:05.934 }, 
00:32:05.934 { 00:32:05.934 "name": "base_device", 00:32:05.934 "bands": [ 00:32:05.934 { 00:32:05.934 "id": 0, 00:32:05.934 "state": "CLOSED", 00:32:05.934 "validity": 1.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 1, 00:32:05.934 "state": "CLOSED", 00:32:05.934 "validity": 1.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 2, 00:32:05.934 "state": "CLOSED", 00:32:05.934 "validity": 0.007843137254901933 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 3, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 4, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 5, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 6, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 7, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 8, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 9, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 10, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 11, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 12, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 13, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 14, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 15, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 16, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 17, 00:32:05.934 "state": "FREE", 00:32:05.934 "validity": 0.0 00:32:05.934 } 00:32:05.934 ], 00:32:05.934 "read-only": true 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "name": "cache_device", 00:32:05.934 "type": "bdev", 00:32:05.934 "chunks": [ 00:32:05.934 { 00:32:05.934 "id": 0, 00:32:05.934 "state": "INACTIVE", 00:32:05.934 "utilization": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 1, 00:32:05.934 "state": "OPEN", 00:32:05.934 "utilization": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 2, 00:32:05.934 "state": "OPEN", 00:32:05.934 "utilization": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 3, 00:32:05.934 "state": "FREE", 00:32:05.934 "utilization": 0.0 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "id": 4, 00:32:05.934 "state": "FREE", 00:32:05.934 "utilization": 0.0 00:32:05.934 } 00:32:05.934 ], 00:32:05.934 "read-only": true 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "name": "verbose_mode", 00:32:05.934 "value": true, 00:32:05.934 "unit": "", 00:32:05.934 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:05.934 }, 00:32:05.934 { 00:32:05.934 "name": "prep_upgrade_on_shutdown", 00:32:05.934 "value": false, 00:32:05.934 "unit": "", 00:32:05.934 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:05.934 } 00:32:05.934 ] 00:32:05.934 } 00:32:05.934 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:32:05.934 17:22:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:05.934 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:06.193 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:32:06.193 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:32:06.193 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:32:06.193 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:06.193 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:32:06.451 Validate MD5 checksum, iteration 1 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:06.451 17:22:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:06.451 [2024-07-25 17:22:58.899352] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
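The upgrade_shutdown.sh trace above fetches the FTL properties over RPC and uses jq to count cache-device chunks with non-zero utilization (used=0 here, matching the property dump earlier where every chunk is at utilization 0.0, so the NV cache write buffer has nothing outstanding). A standalone sketch of that check, assuming the same target is still listening on the default /var/tmp/spdk.sock and reusing the filter exactly as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    echo "used chunks: $used"   # 0 for the JSON shown above (all chunks at utilization 0.0)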
00:32:06.451 [2024-07-25 17:22:58.899729] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85945 ] 00:32:06.709 [2024-07-25 17:22:59.063258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.968 [2024-07-25 17:22:59.348590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.098  Copying: 516/1024 [MB] (516 MBps) Copying: 1019/1024 [MB] (503 MBps) Copying: 1024/1024 [MB] (average 509 MBps) 00:32:11.098 00:32:11.355 17:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:11.355 17:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:13.255 Validate MD5 checksum, iteration 2 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1cdb3a15ee03140c3b4672b25f70146 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1cdb3a15ee03140c3b4672b25f70146 != \f\1\c\d\b\3\a\1\5\e\e\0\3\1\4\0\c\3\b\4\6\7\2\b\2\5\f\7\0\1\4\6 ]] 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:13.255 17:23:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:13.255 [2024-07-25 17:23:05.538513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
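Each "Validate MD5 checksum" iteration above is a single spdk_dd pass: it reads 1024 blocks of 1 MiB from the exported ftln1 bdev over NVMe/TCP into a scratch file, hashes the file, and compares the sum against the expected value recorded earlier in the test (the [[ ... != ... ]] checks above); --skip advances by 1024 per iteration, so successive passes read successive 1 GiB extents. A sketch of one pass with the flags from the trace (iteration 1, skip=0), assuming ini.json describes the initiator-side connection to the target listening on 127.0.0.1 port 4420:

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum "$file" | cut -f1 -d' '   # f1cdb3a15ee03140c3b4672b25f70146 for iteration 1 above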
00:32:13.255 [2024-07-25 17:23:05.538674] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86018 ] 00:32:13.255 [2024-07-25 17:23:05.701993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.513 [2024-07-25 17:23:05.967426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.877  Copying: 482/1024 [MB] (482 MBps) Copying: 986/1024 [MB] (504 MBps) Copying: 1024/1024 [MB] (average 492 MBps) 00:32:18.877 00:32:18.877 17:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:18.877 17:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:20.250 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=11637f23bbd342aa34a1e557c82954f8 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 11637f23bbd342aa34a1e557c82954f8 != \1\1\6\3\7\f\2\3\b\b\d\3\4\2\a\a\3\4\a\1\e\5\5\7\c\8\2\9\5\4\f\8 ]] 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85876 ]] 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85876 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86096 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86096 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 86096 ']' 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:20.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
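tcp_target_shutdown_dirty above kills the spdk_tgt that owns the FTL instance with SIGKILL, so unlike the 'FTL shutdown' sequence earlier in the log, FTL presumably gets no chance to persist a clean-shutdown state; a fresh target (pid 86096) is then started from the same tgt.json and the FTL startup that follows has to deal with whatever was left on the devices. The sequence from the trace, reduced to its essentials (the backgrounding and pid capture are implied by the waitforlisten that follows, not shown verbatim in the xtrace):

    kill -9 "$spdk_tgt_pid"      # 85876 above; no clean FTL shutdown happens
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!              # 86096 in this run
    # wait for /var/tmp/spdk.sock to accept RPCs before continuing (waitforlisten)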
00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:20.508 17:23:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:20.508 [2024-07-25 17:23:12.848585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:20.508 [2024-07-25 17:23:12.848774] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86096 ] 00:32:20.508 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 85876 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:20.766 [2024-07-25 17:23:13.017426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.766 [2024-07-25 17:23:13.201084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.696 [2024-07-25 17:23:14.016336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:21.697 [2024-07-25 17:23:14.016426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:21.697 [2024-07-25 17:23:14.162519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.697 [2024-07-25 17:23:14.162566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:21.697 [2024-07-25 17:23:14.162602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:21.697 [2024-07-25 17:23:14.162613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.697 [2024-07-25 17:23:14.162701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.697 [2024-07-25 17:23:14.162721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:21.697 [2024-07-25 17:23:14.162733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:32:21.697 [2024-07-25 17:23:14.162743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.697 [2024-07-25 17:23:14.162779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:21.697 [2024-07-25 17:23:14.163760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:21.697 [2024-07-25 17:23:14.163811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.697 [2024-07-25 17:23:14.163825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:21.697 [2024-07-25 17:23:14.163836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:32:21.697 [2024-07-25 17:23:14.163852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.697 [2024-07-25 17:23:14.164393] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:21.955 [2024-07-25 17:23:14.183175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.183218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:21.955 [2024-07-25 17:23:14.183258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.783 ms 00:32:21.955 [2024-07-25 17:23:14.183268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.192953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:21.955 [2024-07-25 17:23:14.193029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:21.955 [2024-07-25 17:23:14.193063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:32:21.955 [2024-07-25 17:23:14.193073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.193576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.193643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:21.955 [2024-07-25 17:23:14.193657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:32:21.955 [2024-07-25 17:23:14.193667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.193729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.193747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:21.955 [2024-07-25 17:23:14.193757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:32:21.955 [2024-07-25 17:23:14.193766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.193809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.193824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:21.955 [2024-07-25 17:23:14.193839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:21.955 [2024-07-25 17:23:14.193864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.193893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:21.955 [2024-07-25 17:23:14.197349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.197386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:21.955 [2024-07-25 17:23:14.197400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.462 ms 00:32:21.955 [2024-07-25 17:23:14.197410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.197442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.197456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:21.955 [2024-07-25 17:23:14.197466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:21.955 [2024-07-25 17:23:14.197476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.197519] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:21.955 [2024-07-25 17:23:14.197548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:21.955 [2024-07-25 17:23:14.197586] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:21.955 [2024-07-25 17:23:14.197603] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:32:21.955 [2024-07-25 17:23:14.197688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:21.955 [2024-07-25 17:23:14.197701] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:21.955 [2024-07-25 17:23:14.197713] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:32:21.955 [2024-07-25 17:23:14.197726] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:21.955 [2024-07-25 17:23:14.197738] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:21.955 [2024-07-25 17:23:14.197748] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:21.955 [2024-07-25 17:23:14.197763] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:21.955 [2024-07-25 17:23:14.197771] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:21.955 [2024-07-25 17:23:14.197780] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:21.955 [2024-07-25 17:23:14.197790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.197803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:21.955 [2024-07-25 17:23:14.197813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:32:21.955 [2024-07-25 17:23:14.197822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.197895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.955 [2024-07-25 17:23:14.197908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:21.955 [2024-07-25 17:23:14.197918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:32:21.955 [2024-07-25 17:23:14.197931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.955 [2024-07-25 17:23:14.198058] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:21.955 [2024-07-25 17:23:14.198076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:21.955 [2024-07-25 17:23:14.198088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.955 [2024-07-25 17:23:14.198098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:21.955 [2024-07-25 17:23:14.198117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198127] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:21.955 [2024-07-25 17:23:14.198135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:21.955 [2024-07-25 17:23:14.198146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:21.955 [2024-07-25 17:23:14.198156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:21.955 [2024-07-25 17:23:14.198173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:21.955 [2024-07-25 17:23:14.198182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:21.955 [2024-07-25 17:23:14.198200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:21.955 [2024-07-25 17:23:14.198209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:21.955 [2024-07-25 17:23:14.198226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:21.955 [2024-07-25 17:23:14.198234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.955 [2024-07-25 17:23:14.198243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:21.955 [2024-07-25 17:23:14.198252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:21.955 [2024-07-25 17:23:14.198262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.955 [2024-07-25 17:23:14.198272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:21.955 [2024-07-25 17:23:14.198281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:21.955 [2024-07-25 17:23:14.198290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.955 [2024-07-25 17:23:14.198299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:21.956 [2024-07-25 17:23:14.198308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:21.956 [2024-07-25 17:23:14.198333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.956 [2024-07-25 17:23:14.198342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:21.956 [2024-07-25 17:23:14.198352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:21.956 [2024-07-25 17:23:14.198360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.956 [2024-07-25 17:23:14.198369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:21.956 [2024-07-25 17:23:14.198378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:21.956 [2024-07-25 17:23:14.198387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:21.956 [2024-07-25 17:23:14.198404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:21.956 [2024-07-25 17:23:14.198413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:21.956 [2024-07-25 17:23:14.198430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:21.956 [2024-07-25 17:23:14.198456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:21.956 [2024-07-25 17:23:14.198465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198473] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:21.956 [2024-07-25 17:23:14.198485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:21.956 [2024-07-25 17:23:14.198494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.956 [2024-07-25 17:23:14.198504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:21.956 [2024-07-25 17:23:14.198513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:21.956 [2024-07-25 17:23:14.198522] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:21.956 [2024-07-25 17:23:14.198544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:21.956 [2024-07-25 17:23:14.198554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:21.956 [2024-07-25 17:23:14.198563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:21.956 [2024-07-25 17:23:14.198572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:21.956 [2024-07-25 17:23:14.198584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:21.956 [2024-07-25 17:23:14.198600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:21.956 [2024-07-25 17:23:14.198621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:21.956 [2024-07-25 17:23:14.198649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:21.956 [2024-07-25 17:23:14.198670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:21.956 [2024-07-25 17:23:14.198682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:21.956 [2024-07-25 17:23:14.198692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:21.956 [2024-07-25 17:23:14.198759] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:21.956 [2024-07-25 17:23:14.198770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:21.956 [2024-07-25 17:23:14.198791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:21.956 [2024-07-25 17:23:14.198801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:21.956 [2024-07-25 17:23:14.198811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:21.956 [2024-07-25 17:23:14.198822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.198833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:21.956 [2024-07-25 17:23:14.198843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:32:21.956 [2024-07-25 17:23:14.198852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.234917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.235252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:21.956 [2024-07-25 17:23:14.235375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.001 ms 00:32:21.956 [2024-07-25 17:23:14.235507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.235615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.235722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:21.956 [2024-07-25 17:23:14.235820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:21.956 [2024-07-25 17:23:14.235957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.273517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.273757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:21.956 [2024-07-25 17:23:14.273899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.411 ms 00:32:21.956 [2024-07-25 17:23:14.274043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.274220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.274329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:21.956 [2024-07-25 17:23:14.274448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:21.956 [2024-07-25 17:23:14.274495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.274758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.274814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:21.956 [2024-07-25 17:23:14.274958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:32:21.956 [2024-07-25 17:23:14.275030] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.275145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.275195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:21.956 [2024-07-25 17:23:14.275327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:21.956 [2024-07-25 17:23:14.275373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.294432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.294667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:21.956 [2024-07-25 17:23:14.294801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.917 ms 00:32:21.956 [2024-07-25 17:23:14.294850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.295119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.295248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:21.956 [2024-07-25 17:23:14.295358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:21.956 [2024-07-25 17:23:14.295380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.324924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.324969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:21.956 [2024-07-25 17:23:14.325021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.505 ms 00:32:21.956 [2024-07-25 17:23:14.325034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.335962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.336011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:21.956 [2024-07-25 17:23:14.336044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:32:21.956 [2024-07-25 17:23:14.336054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.401577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.956 [2024-07-25 17:23:14.401644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:21.956 [2024-07-25 17:23:14.401679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.455 ms 00:32:21.956 [2024-07-25 17:23:14.401690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.956 [2024-07-25 17:23:14.401899] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:21.956 [2024-07-25 17:23:14.402100] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:21.956 [2024-07-25 17:23:14.402282] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:21.956 [2024-07-25 17:23:14.402424] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:21.957 [2024-07-25 17:23:14.402437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.957 [2024-07-25 17:23:14.402448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:21.957 [2024-07-25 
17:23:14.402482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:32:21.957 [2024-07-25 17:23:14.402493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.957 [2024-07-25 17:23:14.402606] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:21.957 [2024-07-25 17:23:14.402633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.957 [2024-07-25 17:23:14.402645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:21.957 [2024-07-25 17:23:14.402699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:21.957 [2024-07-25 17:23:14.402714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.214 [2024-07-25 17:23:14.421776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.214 [2024-07-25 17:23:14.421815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:22.214 [2024-07-25 17:23:14.421847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.024 ms 00:32:22.214 [2024-07-25 17:23:14.421868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.214 [2024-07-25 17:23:14.433223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:22.214 [2024-07-25 17:23:14.433265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:22.214 [2024-07-25 17:23:14.433281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:22.214 [2024-07-25 17:23:14.433298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:22.214 [2024-07-25 17:23:14.433728] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:22.779 [2024-07-25 17:23:15.052926] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:22.779 [2024-07-25 17:23:15.053256] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:23.344 [2024-07-25 17:23:15.665182] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:23.344 [2024-07-25 17:23:15.665331] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:23.344 [2024-07-25 17:23:15.665368] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:23.344 [2024-07-25 17:23:15.665384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.665396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:23.344 [2024-07-25 17:23:15.665428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1231.948 ms 00:32:23.344 [2024-07-25 17:23:15.665454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.665496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.665510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:23.344 [2024-07-25 17:23:15.665521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:23.344 [2024-07-25 17:23:15.665532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:32:23.344 [2024-07-25 17:23:15.676959] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:23.344 [2024-07-25 17:23:15.677151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.677169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:23.344 [2024-07-25 17:23:15.677182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.589 ms 00:32:23.344 [2024-07-25 17:23:15.677192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.677912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.677939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:23.344 [2024-07-25 17:23:15.677952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.621 ms 00:32:23.344 [2024-07-25 17:23:15.677962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.680187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.680215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:23.344 [2024-07-25 17:23:15.680244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.197 ms 00:32:23.344 [2024-07-25 17:23:15.680254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.680299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.680313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:23.344 [2024-07-25 17:23:15.680324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:23.344 [2024-07-25 17:23:15.680334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.680464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.680482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:23.344 [2024-07-25 17:23:15.680493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:23.344 [2024-07-25 17:23:15.680503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.344 [2024-07-25 17:23:15.680529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.344 [2024-07-25 17:23:15.680541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:23.344 [2024-07-25 17:23:15.680551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:23.345 [2024-07-25 17:23:15.680560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.345 [2024-07-25 17:23:15.680596] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:23.345 [2024-07-25 17:23:15.680610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.345 [2024-07-25 17:23:15.680620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:23.345 [2024-07-25 17:23:15.680634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:23.345 [2024-07-25 17:23:15.680644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.345 [2024-07-25 17:23:15.680697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.345 
[2024-07-25 17:23:15.680711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:23.345 [2024-07-25 17:23:15.680721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:23.345 [2024-07-25 17:23:15.680730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.345 [2024-07-25 17:23:15.682318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1519.079 ms, result 0 00:32:23.345 [2024-07-25 17:23:15.697659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.345 [2024-07-25 17:23:15.713649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:23.345 [2024-07-25 17:23:15.723155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:23.345 Validate MD5 checksum, iteration 1 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:23.345 17:23:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:23.602 [2024-07-25 17:23:15.833051] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:23.603 [2024-07-25 17:23:15.833420] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86131 ] 00:32:23.603 [2024-07-25 17:23:15.994579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.860 [2024-07-25 17:23:16.241273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.132  Copying: 490/1024 [MB] (490 MBps) Copying: 975/1024 [MB] (485 MBps) Copying: 1024/1024 [MB] (average 487 MBps) 00:32:28.132 00:32:28.132 17:23:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:28.132 17:23:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:30.033 Validate MD5 checksum, iteration 2 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1cdb3a15ee03140c3b4672b25f70146 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1cdb3a15ee03140c3b4672b25f70146 != \f\1\c\d\b\3\a\1\5\e\e\0\3\1\4\0\c\3\b\4\6\7\2\b\2\5\f\7\0\1\4\6 ]] 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:30.033 17:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:30.033 [2024-07-25 17:23:22.455180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:30.033 [2024-07-25 17:23:22.455321] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86198 ] 00:32:30.291 [2024-07-25 17:23:22.614696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.549 [2024-07-25 17:23:22.855629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.646  Copying: 487/1024 [MB] (487 MBps) Copying: 983/1024 [MB] (496 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:32:34.646 00:32:34.905 17:23:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:34.905 17:23:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=11637f23bbd342aa34a1e557c82954f8 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 11637f23bbd342aa34a1e557c82954f8 != \1\1\6\3\7\f\2\3\b\b\d\3\4\2\a\a\3\4\a\1\e\5\5\7\c\8\2\9\5\4\f\8 ]] 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86096 ]] 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86096 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 86096 ']' 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 86096 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86096 00:32:36.832 killing process with pid 86096 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86096' 00:32:36.832 17:23:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 86096 00:32:36.832 17:23:29 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 86096 00:32:37.768 [2024-07-25 17:23:30.042205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:37.768 [2024-07-25 17:23:30.059498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.059716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:37.768 [2024-07-25 17:23:30.059860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:37.768 [2024-07-25 17:23:30.059910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.060088] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:37.768 [2024-07-25 17:23:30.063524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.063719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:37.768 [2024-07-25 17:23:30.063750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.351 ms 00:32:37.768 [2024-07-25 17:23:30.063761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.064074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.064095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:37.768 [2024-07-25 17:23:30.064108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:32:37.768 [2024-07-25 17:23:30.064119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.065273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.065305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:37.768 [2024-07-25 17:23:30.065320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:32:37.768 [2024-07-25 17:23:30.065338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.066458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.066492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:37.768 [2024-07-25 17:23:30.066506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:32:37.768 [2024-07-25 17:23:30.066515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.077285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.077323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:37.768 [2024-07-25 17:23:30.077361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.704 ms 00:32:37.768 [2024-07-25 17:23:30.077372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.083285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.083323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:37.768 [2024-07-25 17:23:30.083354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.874 ms 00:32:37.768 [2024-07-25 17:23:30.083365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.083457] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.083481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:37.768 [2024-07-25 17:23:30.083493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:37.768 [2024-07-25 17:23:30.083507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.093986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.094019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:32:37.768 [2024-07-25 17:23:30.094049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.459 ms 00:32:37.768 [2024-07-25 17:23:30.094058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.104572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.104608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:32:37.768 [2024-07-25 17:23:30.104639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.477 ms 00:32:37.768 [2024-07-25 17:23:30.104648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.114804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.114840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:37.768 [2024-07-25 17:23:30.114854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.119 ms 00:32:37.768 [2024-07-25 17:23:30.114864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.125083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.125116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:37.768 [2024-07-25 17:23:30.125145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.155 ms 00:32:37.768 [2024-07-25 17:23:30.125154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.125191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:37.768 [2024-07-25 17:23:30.125214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:37.768 [2024-07-25 17:23:30.125227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:37.768 [2024-07-25 17:23:30.125237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:37.768 [2024-07-25 17:23:30.125248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:37.768 [2024-07-25 17:23:30.125413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:37.768 [2024-07-25 17:23:30.125422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 910dd370-4ca2-438f-b595-e5eb41cd831c 00:32:37.768 [2024-07-25 17:23:30.125433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:37.768 [2024-07-25 17:23:30.125442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:37.768 [2024-07-25 17:23:30.125451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:37.768 [2024-07-25 17:23:30.125461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:37.768 [2024-07-25 17:23:30.125470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:37.768 [2024-07-25 17:23:30.125479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:37.768 [2024-07-25 17:23:30.125493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:37.768 [2024-07-25 17:23:30.125501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:37.768 [2024-07-25 17:23:30.125510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:37.768 [2024-07-25 17:23:30.125520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.125530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:37.768 [2024-07-25 17:23:30.125543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:32:37.768 [2024-07-25 17:23:30.125553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.768 [2024-07-25 17:23:30.140274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:37.768 [2024-07-25 17:23:30.140309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:37.768 [2024-07-25 17:23:30.140340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.700 ms 00:32:37.769 [2024-07-25 17:23:30.140357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.769 [2024-07-25 17:23:30.140760] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:32:37.769 [2024-07-25 17:23:30.140780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:37.769 [2024-07-25 17:23:30.140792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:32:37.769 [2024-07-25 17:23:30.140802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.769 [2024-07-25 17:23:30.189236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:37.769 [2024-07-25 17:23:30.189280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:37.769 [2024-07-25 17:23:30.189312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:37.769 [2024-07-25 17:23:30.189329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.769 [2024-07-25 17:23:30.189368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:37.769 [2024-07-25 17:23:30.189382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:37.769 [2024-07-25 17:23:30.189392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:37.769 [2024-07-25 17:23:30.189402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.769 [2024-07-25 17:23:30.189504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:37.769 [2024-07-25 17:23:30.189523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:37.769 [2024-07-25 17:23:30.189535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:37.769 [2024-07-25 17:23:30.189545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:37.769 [2024-07-25 17:23:30.189573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:37.769 [2024-07-25 17:23:30.189587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:37.769 [2024-07-25 17:23:30.189598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:37.769 [2024-07-25 17:23:30.189607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 17:23:30.272675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.272741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:38.027 [2024-07-25 17:23:30.272775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.027 [2024-07-25 17:23:30.272793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 17:23:30.343527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.343572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:38.027 [2024-07-25 17:23:30.343604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.027 [2024-07-25 17:23:30.343616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 17:23:30.343725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.343774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:38.027 [2024-07-25 17:23:30.343787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.027 [2024-07-25 17:23:30.343797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 
17:23:30.343855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.343879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:38.027 [2024-07-25 17:23:30.343891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.027 [2024-07-25 17:23:30.343901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 17:23:30.344010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.344047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:38.027 [2024-07-25 17:23:30.344062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.027 [2024-07-25 17:23:30.344072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.027 [2024-07-25 17:23:30.344124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.027 [2024-07-25 17:23:30.344147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:38.028 [2024-07-25 17:23:30.344159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.028 [2024-07-25 17:23:30.344168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.028 [2024-07-25 17:23:30.344245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.028 [2024-07-25 17:23:30.344259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:38.028 [2024-07-25 17:23:30.344270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.028 [2024-07-25 17:23:30.344281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.028 [2024-07-25 17:23:30.344334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:38.028 [2024-07-25 17:23:30.344357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:38.028 [2024-07-25 17:23:30.344369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:38.028 [2024-07-25 17:23:30.344380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:38.028 [2024-07-25 17:23:30.344531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 284.987 ms, result 0 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:38.961 Remove shared memory files 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85876 
00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:38.961 ************************************ 00:32:38.961 END TEST ftl_upgrade_shutdown 00:32:38.961 ************************************ 00:32:38.961 00:32:38.961 real 1m31.038s 00:32:38.961 user 2m8.912s 00:32:38.961 sys 0m23.450s 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.961 17:23:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@14 -- # killprocess 78165 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@950 -- # '[' -z 78165 ']' 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@954 -- # kill -0 78165 00:32:39.219 Process with pid 78165 is not found 00:32:39.219 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78165) - No such process 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 78165 is not found' 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86324 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86324 00:32:39.219 17:23:31 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@831 -- # '[' -z 86324 ']' 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.219 17:23:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:39.220 [2024-07-25 17:23:31.536687] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:39.220 [2024-07-25 17:23:31.536851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86324 ] 00:32:39.481 [2024-07-25 17:23:31.691226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.481 [2024-07-25 17:23:31.898598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.416 17:23:32 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.416 17:23:32 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:40.416 17:23:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:40.416 nvme0n1 00:32:40.416 17:23:32 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:40.416 17:23:32 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:40.416 17:23:32 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:40.674 17:23:33 ftl -- ftl/common.sh@28 -- # stores=e17407a3-0b67-4016-8946-17bfa8dbfc3c 00:32:40.674 17:23:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:40.674 17:23:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e17407a3-0b67-4016-8946-17bfa8dbfc3c 00:32:40.933 17:23:33 ftl -- ftl/ftl.sh@23 -- # killprocess 86324 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 86324 ']' 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@954 -- # kill -0 86324 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@955 -- # uname 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86324 00:32:40.933 killing process with pid 86324 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86324' 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@969 -- # kill 86324 00:32:40.933 17:23:33 ftl -- common/autotest_common.sh@974 -- # wait 86324 00:32:42.835 17:23:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:43.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:43.093 Waiting for block devices as requested 00:32:43.093 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:43.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:43.350 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:43.350 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:48.621 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:48.621 17:23:40 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:48.621 Remove shared memory files 00:32:48.621 17:23:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:48.621 17:23:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:48.621 17:23:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:48.621 17:23:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:48.621 17:23:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:48.621 17:23:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:48.621 
************************************ 00:32:48.621 END TEST ftl 00:32:48.621 ************************************ 00:32:48.621 00:32:48.622 real 12m25.400s 00:32:48.622 user 15m28.902s 00:32:48.622 sys 1m34.925s 00:32:48.622 17:23:40 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.622 17:23:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:48.622 17:23:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:48.622 17:23:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:48.622 17:23:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:48.622 17:23:40 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:48.622 17:23:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:48.622 17:23:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:48.622 17:23:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:48.622 17:23:40 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:48.622 17:23:40 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:48.622 17:23:40 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:48.622 17:23:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.622 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:32:48.622 17:23:40 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:48.622 17:23:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:48.622 17:23:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:48.622 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:32:49.997 INFO: APP EXITING 00:32:49.997 INFO: killing all VMs 00:32:49.997 INFO: killing vhost app 00:32:49.997 INFO: EXIT DONE 00:32:50.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:50.822 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:50.822 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:50.822 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:50.822 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:51.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:51.651 Cleaning 00:32:51.651 Removing: /var/run/dpdk/spdk0/config 00:32:51.651 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:51.651 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:51.651 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:51.651 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:51.651 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:51.651 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:51.651 Removing: /var/run/dpdk/spdk0 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62090 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62306 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62533 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62637 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62693 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62826 00:32:51.651 Removing: /var/run/dpdk/spdk_pid62850 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63036 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63142 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63246 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63360 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63458 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63503 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63545 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63613 00:32:51.651 Removing: /var/run/dpdk/spdk_pid63730 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64199 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64279 
00:32:51.651 Removing: /var/run/dpdk/spdk_pid64359 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64375 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64534 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64561 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64715 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64742 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64807 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64836 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64900 00:32:51.651 Removing: /var/run/dpdk/spdk_pid64929 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65116 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65158 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65239 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65412 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65507 00:32:51.651 Removing: /var/run/dpdk/spdk_pid65549 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66032 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66130 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66245 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66309 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66340 00:32:51.651 Removing: /var/run/dpdk/spdk_pid66416 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67048 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67095 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67610 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67714 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67834 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67898 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67929 00:32:51.651 Removing: /var/run/dpdk/spdk_pid67959 00:32:51.651 Removing: /var/run/dpdk/spdk_pid69823 00:32:51.651 Removing: /var/run/dpdk/spdk_pid69969 00:32:51.651 Removing: /var/run/dpdk/spdk_pid69978 00:32:51.651 Removing: /var/run/dpdk/spdk_pid69990 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70039 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70043 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70055 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70100 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70104 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70116 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70163 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70167 00:32:51.651 Removing: /var/run/dpdk/spdk_pid70179 00:32:51.651 Removing: /var/run/dpdk/spdk_pid71532 00:32:51.651 Removing: /var/run/dpdk/spdk_pid71632 00:32:51.651 Removing: /var/run/dpdk/spdk_pid73039 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74382 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74498 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74607 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74719 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74846 00:32:51.651 Removing: /var/run/dpdk/spdk_pid74926 00:32:51.651 Removing: /var/run/dpdk/spdk_pid75060 00:32:51.909 Removing: /var/run/dpdk/spdk_pid75430 00:32:51.909 Removing: /var/run/dpdk/spdk_pid75466 00:32:51.909 Removing: /var/run/dpdk/spdk_pid75939 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76121 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76226 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76337 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76396 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76427 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76716 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76775 00:32:51.909 Removing: /var/run/dpdk/spdk_pid76854 00:32:51.909 Removing: /var/run/dpdk/spdk_pid77238 00:32:51.909 Removing: /var/run/dpdk/spdk_pid77383 00:32:51.909 Removing: /var/run/dpdk/spdk_pid78165 00:32:51.909 Removing: /var/run/dpdk/spdk_pid78295 00:32:51.909 Removing: /var/run/dpdk/spdk_pid78480 00:32:51.909 Removing: 
/var/run/dpdk/spdk_pid78589 00:32:51.909 Removing: /var/run/dpdk/spdk_pid78931 00:32:51.909 Removing: /var/run/dpdk/spdk_pid79206 00:32:51.909 Removing: /var/run/dpdk/spdk_pid79568 00:32:51.909 Removing: /var/run/dpdk/spdk_pid79758 00:32:51.909 Removing: /var/run/dpdk/spdk_pid79905 00:32:51.909 Removing: /var/run/dpdk/spdk_pid79969 00:32:51.909 Removing: /var/run/dpdk/spdk_pid80128 00:32:51.909 Removing: /var/run/dpdk/spdk_pid80153 00:32:51.909 Removing: /var/run/dpdk/spdk_pid80217 00:32:51.909 Removing: /var/run/dpdk/spdk_pid80420 00:32:51.909 Removing: /var/run/dpdk/spdk_pid80653 00:32:51.909 Removing: /var/run/dpdk/spdk_pid81143 00:32:51.909 Removing: /var/run/dpdk/spdk_pid81636 00:32:51.909 Removing: /var/run/dpdk/spdk_pid82133 00:32:51.909 Removing: /var/run/dpdk/spdk_pid82671 00:32:51.909 Removing: /var/run/dpdk/spdk_pid82819 00:32:51.909 Removing: /var/run/dpdk/spdk_pid82916 00:32:51.909 Removing: /var/run/dpdk/spdk_pid83772 00:32:51.909 Removing: /var/run/dpdk/spdk_pid83847 00:32:51.909 Removing: /var/run/dpdk/spdk_pid84324 00:32:51.909 Removing: /var/run/dpdk/spdk_pid84752 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85292 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85416 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85469 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85539 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85599 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85671 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85876 00:32:51.909 Removing: /var/run/dpdk/spdk_pid85945 00:32:51.909 Removing: /var/run/dpdk/spdk_pid86018 00:32:51.909 Removing: /var/run/dpdk/spdk_pid86096 00:32:51.909 Removing: /var/run/dpdk/spdk_pid86131 00:32:51.909 Removing: /var/run/dpdk/spdk_pid86198 00:32:51.909 Removing: /var/run/dpdk/spdk_pid86324 00:32:51.909 Clean 00:32:51.909 17:23:44 -- common/autotest_common.sh@1451 -- # return 0 00:32:51.909 17:23:44 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:51.909 17:23:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.909 17:23:44 -- common/autotest_common.sh@10 -- # set +x 00:32:51.909 17:23:44 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:51.909 17:23:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.909 17:23:44 -- common/autotest_common.sh@10 -- # set +x 00:32:52.167 17:23:44 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:52.167 17:23:44 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:52.167 17:23:44 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:52.167 17:23:44 -- spdk/autotest.sh@395 -- # hash lcov 00:32:52.167 17:23:44 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:52.167 17:23:44 -- spdk/autotest.sh@397 -- # hostname 00:32:52.167 17:23:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:52.167 geninfo: WARNING: invalid characters removed from testname! 
00:33:18.766 17:24:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:18.767 17:24:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:20.142 17:24:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:22.675 17:24:14 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:25.208 17:24:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:27.111 17:24:19 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:29.645 17:24:21 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:29.645 17:24:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:29.645 17:24:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:29.645 17:24:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.645 17:24:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.645 17:24:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.645 17:24:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.645 17:24:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.645 17:24:21 -- paths/export.sh@5 -- $ export PATH 00:33:29.645 17:24:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.645 17:24:21 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:29.645 17:24:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:29.645 17:24:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721928261.XXXXXX 00:33:29.645 17:24:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721928261.ouQUSY 00:33:29.645 17:24:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:29.645 17:24:21 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:33:29.645 17:24:21 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:29.645 17:24:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:29.645 17:24:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:29.645 17:24:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:29.645 17:24:21 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:33:29.645 17:24:21 -- common/autotest_common.sh@10 -- $ set +x 00:33:29.645 17:24:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:29.645 17:24:21 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:33:29.645 17:24:21 -- pm/common@17 -- $ local monitor 00:33:29.645 17:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.645 17:24:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.645 17:24:21 -- pm/common@25 -- $ sleep 1 00:33:29.645 17:24:21 -- pm/common@21 -- $ date +%s 00:33:29.645 17:24:21 -- pm/common@21 -- $ date +%s 00:33:29.645 17:24:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721928261 00:33:29.645 17:24:21 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721928261 00:33:29.645 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721928261_collect-vmstat.pm.log 00:33:29.645 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721928261_collect-cpu-load.pm.log 00:33:30.580 17:24:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:33:30.580 17:24:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:30.580 17:24:22 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:30.580 17:24:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:30.580 17:24:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:30.580 17:24:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:30.580 17:24:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:30.580 17:24:22 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:30.580 17:24:22 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:30.580 17:24:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:30.580 17:24:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:30.580 17:24:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:30.580 17:24:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:30.580 17:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.580 17:24:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:30.580 17:24:23 -- pm/common@44 -- $ pid=87996 00:33:30.580 17:24:23 -- pm/common@50 -- $ kill -TERM 87996 00:33:30.580 17:24:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.580 17:24:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:30.580 17:24:23 -- pm/common@44 -- $ pid=87998 00:33:30.580 17:24:23 -- pm/common@50 -- $ kill -TERM 87998 00:33:30.580 + [[ -n 5205 ]] 00:33:30.580 + sudo kill 5205 00:33:30.590 [Pipeline] } 00:33:30.609 [Pipeline] // timeout 00:33:30.616 [Pipeline] } 00:33:30.634 [Pipeline] // stage 00:33:30.640 [Pipeline] } 00:33:30.663 [Pipeline] // catchError 00:33:30.673 [Pipeline] stage 00:33:30.676 [Pipeline] { (Stop VM) 00:33:30.691 [Pipeline] sh 00:33:30.970 + vagrant halt 00:33:34.252 ==> default: Halting domain... 00:33:39.525 [Pipeline] sh 00:33:39.803 + vagrant destroy -f 00:33:43.083 ==> default: Removing domain... 
00:33:43.355 [Pipeline] sh
00:33:43.635 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:43.643 [Pipeline] }
00:33:43.661 [Pipeline] // stage
00:33:43.667 [Pipeline] }
00:33:43.686 [Pipeline] // dir
00:33:43.692 [Pipeline] }
00:33:43.710 [Pipeline] // wrap
00:33:43.717 [Pipeline] }
00:33:43.734 [Pipeline] // catchError
00:33:43.743 [Pipeline] stage
00:33:43.745 [Pipeline] { (Epilogue)
00:33:43.760 [Pipeline] sh
00:33:44.040 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:49.342 [Pipeline] catchError
00:33:49.345 [Pipeline] {
00:33:49.361 [Pipeline] sh
00:33:49.642 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:49.900 Artifacts sizes are good
00:33:49.908 [Pipeline] }
00:33:49.927 [Pipeline] // catchError
00:33:49.939 [Pipeline] archiveArtifacts
00:33:49.957 Archiving artifacts
00:33:50.077 [Pipeline] cleanWs
00:33:50.088 [WS-CLEANUP] Deleting project workspace...
00:33:50.088 [WS-CLEANUP] Deferred wipeout is used...
00:33:50.095 [WS-CLEANUP] done
00:33:50.097 [Pipeline] }
00:33:50.115 [Pipeline] // stage
00:33:50.121 [Pipeline] }
00:33:50.139 [Pipeline] // node
00:33:50.146 [Pipeline] End of Pipeline
00:33:50.194 Finished: SUCCESS